
Construction of Generic Test Environment for Embedded Systems


MAGNUS DORMVIK GUSTAV LEESIK

Master of Science Thesis Stockholm, Sweden 2013


Master of Science Thesis MMK 2013:85 MDA 423 KTH Industrial Engineering and Management Machine Design SE-100 44 STOCKHOLM

Examensarbete MMK 2013:85 MDA 423

Konstruktion av generisk testmiljö för inbyggda system

Magnus Erik Leonard Dormvik, Gustav Felix Leesik

Approved: 2013-11-11
Examiner: Mats Hanson
Supervisor: Sagar Behere
Commissioner: Data Respons AB
Contact person: Stefan Stendahl

Sammanfattning

Data Respons AB develops custom embedded systems, ranging from modified computers based on the x86 architecture to simpler microcontrollers. The purpose of this master thesis was to evaluate and develop methods for testing these systems. The goal was to identify test methods and solutions that make testing of embedded systems faster, more accurate and more traceable. The work covered analysis of existing hardware tests and test documentation, design of complementary test methods, and implementation of these.

The analysis covered how Data Respons works with tests of embedded systems, from development and assembly to fault analysis. Interviews were conducted, and support documentation and routines were analysed. Identified problems were reworked into requirements, which were used to evaluate existing products as well as the need for custom-made solutions.

A test execution framework was developed in C++ to automate hardware tests by minimising the manual routines around executing and documenting tests. The test method Boundary Scan was evaluated. This method enables hardware tests of circuit boards using the test functions built into the boards' components, and makes it possible to test parts of a circuit board that are normally unreachable with any other test method. Boundary Scan lets developers isolate hardware faults without involving the software. The test tool electronic load was also evaluated. An electronic load can emulate different types of electrical loads and is used to test circuits that supply power. Many faults in electrical systems can be traced to a failing power circuit, so testing these circuits is crucial for achieving high reliability.

Master of Science Thesis MMK 2013:85 MDA 423

Construction of generic test environment for embedded systems

Magnus Erik Leonard Dormvik, Gustav Felix Leesik

Approved: 2013-11-11
Examiner: Mats Hanson
Supervisor: Sagar Behere
Commissioner: Data Respons AB
Contact person: Stefan Stendahl

Abstract

Data Respons AB develops custom embedded systems, ranging from modified PCs to microcontrollers used for specific tasks. The purpose of this master thesis is to analyse how to make tests of embedded systems faster, more accurate, easier to reproduce and quality assured, and to implement a solution to these needs as far as possible.

The work covered how Data Respons works with tests of embedded systems, from development and assembly to fault analysis. It starts with a general analysis of possible improvements regarding tests. Interviews were held, and support documentation and routines regarding test were examined. Identified problems and needs for improvement were reworked into requirements, which were mapped to existing products and to the need for custom-made test software. The result was a solution package containing:

A test execution framework. This test software is used to automate tests and reduce manual procedures and interactions with the system under test. The solution also dramatically reduces the work involved in logging the tests.

An implementation of the test method boundary scan. This test method enables hardware tests on the physical layer of a printed circuit board (PCB) and enables testing of parts of the PCB unreachable with any other test method. Boundary scan also enables developers to identify whether a fault is caused by hardware or software.

An evaluation of electronic loads. An electronic load is equipment that can emulate the load of a circuit that produces electric power. Testing power circuits with the real load can be dangerous, noisy or inconvenient. Many failed systems have a root cause in a failed power circuit, so testing these circuits is crucial for high reliability.

FOREWORD

This master thesis was done for the department of Machine Design at KTH (Royal Institute of Technology), Sweden. It was carried out at Data Respons AB, mostly at the Kista office, Sweden, during the period June 2011 to February 2012. We would like to thank: Jeanette Fridberg, manager of the Solutions department when this master thesis began, for valuable everyday guidance; Stefan Ohlson, our technical supervisor at Data Respons, for technical guidance on embedded systems; and Sagar Behere, our supervisor at KTH, for tips on programming and organisational issues. Finally, we would like to thank the whole Data Respons office in Kista for an inspiring environment, a challenging task and a good time.

Gustav Felix Leesik Magnus Erik Leonard Dormvik

Kista March 2012

NOMENCLATURE

Abbreviations

AC – Alternating Current
AD – Analog to Digital
ARENA – Software to handle BOMs and products
ARM – CPU architecture
BASH – A shell
BGA – Ball Grid Array
BIST – Built-In Self Test
BOM – Bill Of Material
BS – Boundary Scan
BSDL – Boundary Scan Description Language
CLI – Command Line Interface
CPU – Central Processing Unit
DA – Digital to Analog
DC – Direct Current
DFT – Design For Test
DOA – Dead On Arrival
DUT – Device Under Test
EMC – Electro Magnetic Compatibility
EMI – Electro Magnetic Interference
ESD – Electro Static Discharge
FMEA – Failure Mode and Effect Analysis
FMFI – Fucking Magic Fixed Itself
FTA – Fault Tree Analysis
FW – FirmWare
GUI – Graphical User Interface
HDD – Hard Disk Drive
HW – HardWare
IC – Integrated Circuit
ICT – In Circuit Testing
IEEE – Institute of Electrical and Electronics Engineers
ISO – International Organisation for Standardisation
ISP – In System Programming
JTAG – Joint Test Action Group
KTH – Kungliga Tekniska Högskolan (Royal Institute of Technology)
LGA – Land Grid Array
MCU – Micro Controller Unit
MOSFET – Type of power transistor
MTBF – Mean Time Between Failure
NI – National Instruments
OEM – Original Equipment Manufacturer
OS – Operating System
PCB – Printed Circuit Board
PGA – Pin Grid Array
PLM – Product Lifecycle Management
PWM – Pulse Width Modulation
PXE – Preboot eXecution Environment
QMS – Quality Management System
RAM – Random Access Memory
RMA – Return to Manufacturer Authorisation
SPI – Serial Peripheral Interface
SVN – SubVersioN, version handling software
SW – SoftWare
TAP – Test Access Port
TCK – Test ClocK pin
TDI – Test Data Input pin
TDO – Test Data Output pin
TEX – Test Execution framework
TMS – Test Mode Select pin
TRST – Test ReSeT pin
USB – Universal Serial Bus

Terms

Board bring-up – The process of getting software to run on new hardware, i.e. making sure a printed circuit board behaves as specified
Operations – The department at Data Respons that assembles and repairs products, builds prototypes and handles logistics
System Integration – The department where system integration is done
Solutions – The in-house development department
Services – The consultant part of Data Respons
Loop-back plug – A device plug that connects a sending pin to the corresponding receiving pin

TABLE OF CONTENTS

SAMMANFATTNING...... 1

ABSTRACT...... 3

FOREWORD...... 5

NOMENCLATURE...... 7

1 INTRODUCTION...... 11

1.1 Background ...... 11
1.2 General goals ...... 11
1.3 Purpose ...... 12
1.4 Disposition ...... 12
1.5 References ...... 12
1.6 Priority ...... 12
1.7 Method ...... 12

2 PROBLEM ANALYSIS...... 17

2.1 Business analysis ...... 17
2.2 Summary interviews ...... 17
2.3 Company-wide areas of improvement ...... 23
2.4 Test's role in a greater perspective ...... 23

3 TARGET AREA AND APPROACH...... 24

3.1 Top-down approach ...... 24
3.2 Bottom-up approach ...... 26
3.3 Advice for decision ...... 27
3.4 Decision ...... 27

4 TESTING METHODS OF EMBEDDED SYSTEMS...... 28

4.1 Testing methods ...... 28
4.2 Areas of testing ...... 29
4.3 Functional test ...... 30
4.4 In Circuit Testing ...... 31
4.5 X-ray ...... 32

4.6 Boundary-Scan ...... 35
4.7 Electronic load ...... 44

5 PROBLEM DEFINITION...... 47

5.1 Overview ...... 47
5.2 Defined areas ...... 47
5.3 Delimitations ...... 49

6 THE DESIGN PROCESS...... 51

6.1 Solution mapping ...... 51
6.2 Solution coverage ...... 51
6.3 Early stage user manual ...... 52
6.4 Test system specifications ...... 52
6.5 Test execution framework, TEX ...... 54
6.6 External port logger ...... 72
6.7 Boundary-scan ...... 73
6.8 Electronic load ...... 73

7 TEX IMPLEMENTATION...... 78

7.1 Test execution framework (TEX) implementation...... 78

8 BOUNDARY-SCAN PRODUCT EVALUATION...... 88

8.1 General functional overview...... 88

9 RESULT ...... 90

9.1 Approach goals ...... 90
9.2 Solution package evaluation ...... 90

10 FURTHER WORK...... 92

10.1 Test execution framework implementation ...... 92
10.2 Boundary-scan product implementation ...... 92
10.3 Electronic load implementation ...... 92
10.4 Routines ...... 92

11 REFERENCES...... 93

APPENDIX 1: INTERVIEWS, BRIEF......

APPENDIX 2: ADVICE FOR DECISION......

APPENDIX 3: LIST OF TESTING ISSUES......

APPENDIX 4: PRIORITY PLAN......

APPENDIX 5: SOLUTIONS MATRIX......

APPENDIX 6: RMA STATISTICS......

APPENDIX 7: RISK ASSESSMENT......

APPENDIX 8: XJTAG PRODUCT EVALUATION......

APPENDIX 9: TOPJTAG PRODUCT EVALUATION......

APPENDIX 10: DESIGN EVALUATION MEETING ......

APPENDIX 11: C++ CODE......

APPENDIX 12: USER MANUAL......

APPENDIX 13: SOFTWARE USED ......

APPENDIX 14: TEST SYSTEM SPECIFICATION......

1 INTRODUCTION

This chapter gives a short background to Data Respons AB, the initially defined problem and the resulting solution. The chapter also briefly describes the work of the master thesis and what methods were used to achieve the goals. A short explanation of the disposition is also given.

1.1 Background

Data Respons wants to improve the quality of their products by creating better tests and methods for testing. All products that are shipped to a customer must be tested, and the quality requirements are very high. This final hardware test, covering the product's functions, is done in-house and is often time consuming. The products that need to be tested are often very different from each other. This makes it hard to re-use testing equipment and procedures, so the wheel is re-invented every time a new product is tested for the first time, which makes development and assembly expensive. Also, since many tests are done more or less manually, human mistakes occur.

1.1.1 Data Respons

Data Respons is a Norwegian company that was founded in 1986. It develops and delivers customised embedded solutions to leading OEM companies. Data Respons has offices in Europe and Asia, and the different offices have different areas of competence. This master thesis was done at the offices in Sweden, with main focus on Kista, Stockholm. The Kista office has both development and final assembly.

1.1.2 Problem

The developers lack routines for verification and documentation. The testing equipment that developers use today is typically either very generic, such as an oscilloscope, or very specific, such as an Ethernet analyser. This can be condensed to three distinct problems:

• Testing takes too much time, both in the development and assembly phases, mainly because of high human interaction and a lack of support documents and templates regarding test procedures and documentation.
• There are doubts about the relevance, quality and repeatability of many tests.
• It is difficult to find results and statistics from old tests and detailed information about how the tests were carried out.

1.2 General goals

• Analyse the different needs for improvements regarding testing at the offices in Sweden.
• Speed up repetitive tests of multiple units.
• Minimise errors caused by human interaction.

• Create methods and documentation to make tests traceable and repeatable.
• Automate and minimise human interaction in test procedures.
• Improve test methods.

1.3 Purpose

The purpose of this master thesis is to analyse how to make hardware tests of embedded systems faster, more accurate, easier to reproduce and quality assured, and to implement a solution to these needs. This will result in better documentation of tests, better procedures for testing in all parts of the product life cycle and better tools for product quality analysis, which in turn will result in higher product quality.

1.4 Disposition

Looking at the disposition of the report, it can be noted that the problem definition does not appear until chapter 5. This reflects the large amount of work spent on identifying the task: the definition could not be made earlier in the process at a level where it could be used to set any specific requirements on the solution.

1.5 References

Since a lot of material was gathered during the analysis phase, many of the conclusions in the report refer to the conditions at Data Respons and are not general assumptions from other research. This is done intentionally, so that the conclusions from this master thesis can be analysed and compared to other studies, best practices and literature in the area.

1.6 Priority

Priorities were set for all tasks. The two main products of this master thesis, the test software and the hardware, were divided into smaller tasks, all of which were prioritised on a three-grade scale. The test software had higher priority, which resulted in the hardware not being finalised. See APPENDIX 4: PRIORITY PLAN.

1.7 Method

1.7.1 The V-model

The development was done using the V-model, a well-known development model initially intended for software development. Among other things, it states that all requirements need to be testable. See Illustration 1: V-model.

Illustration 1: V-model

A modified V-model was used to fit the project. The model was expanded to include general goals, a target area evaluation and three individual tracks for test improvement. See Illustration 2: Specific V-model.

1.7.2 The modified V-model

[Diagram: the modified V-model, pairing each descending step (analysis and interviews; time planning and priorities; electronic load; Boundary Scan; test execution framework; C++ code) with its ascending counterpart (general improvement evaluation; system requirements testing; delivered subsystem; electronic load evaluation; Boundary Scan evaluation; test execution framework testing; product testing)]

Illustration 2: Specific V-model

1.7.3 Improvement area analysis

When the work started at Data Respons, an analysis was needed of what types of test-related improvements were required. This was done by conducting interviews at Data Respons and analysing the documentation regarding testing and quality assurance available there.

1.7.4 Decision on approach

Two main tracks of improvement were identified. The top-down approach covered routines, support documentation and communication. The bottom-up approach covered known problems regarding practical issues.

An advice for decision was created in which the two approaches were analysed. It was used to provide Data Respons with the information needed to decide which approach to choose. The approach chosen by the Data Respons Solutions manager was the bottom-up approach, which was also what the authors of this master thesis recommended.

1.7.5 Research on subject

Research on embedded systems testing was done to gather the information needed to properly identify the problems and find suitable solutions. The in-depth studies of boundary scan and electronic loads were also covered here.

1.7.5.1 Boundary scan in-depth study

Boundary-scan testing was first mentioned during the interviews with the developers at the Örebro office. It was presented as an expensive method that was complex to use, but after a short pre-study of the subject and the tools available today, it appeared highly interesting. It was identified as a possible solution to one of the problems regarding board bring-up and also as a suitable target for an in-depth study, see "APPENDIX 3: LIST OF TESTING ISSUES".

1.7.5.2 Electronic load in-depth study

The procedure of testing power circuits was identified during the interviews as a time-consuming task, see chapter 11.1 "Sammanfattning av intervjuer av utvecklare och montörer på Data Respons" (Summary of interviews with developers and assemblers at Data Respons). As of now, power circuits are either not tested at all, tested using the real actuator, or tested using a power resistor. Since very few devices are specified to draw the same current at the same voltage, power resistors are hard to re-use. Using them also involves calculations and the construction of a chassis to hold the resistors. A digital electronic load, which is adjustable in both voltage and current, could be a solution to this problem. How electronic loads work, how to build one, and recommendations on existing products were investigated.

1.7.6 Analysis of the refined task

A second analysis phase was initiated after the decision to perform the work bottom-up. This analysis involved discussions with the developers and with the engineers performing functional tests at the end of the assembly line. A problem definition was made.

1.7.7 Solution package design

A solutions matrix (APPENDIX 5: SOLUTIONS MATRIX) was created to identify suitable solutions with good problem-solving coverage. A system specification and flowcharts were made, and an early-stage user manual was created to get a clear view of the solution. The three identified solutions were an automated test execution framework (TEX), a boundary-scan solution and an electronic load solution. The concept was tested by running the software (TEX) on an embedded system. The solution package consists of TEX, boundary scan and electronic load.

1.7.8 Test execution framework design

An evaluation was made to decide which programming language should be used, considering the system requirements and cross-platform functionality. C++ with the Qt library was chosen because C++ is a well-proven language that is familiar to the authors of this thesis, and the Qt library provides a convenient way to cross-compile to several target platforms from the same code base.

1.7.9 Boundary-scan testing

The boundary-scan solution was tested by installing two products and testing their functionality on an embedded system developed by Data Respons.

1.7.10 Electronic load testing

The electronic load solution was not tested explicitly due to lack of time. Earlier experiences with electronic loads show that they are easy to use and give repeatable results.

1.7.11 Test execution framework evaluation

The test execution framework was evaluated by presenting the solution to the developers at Data Respons. The evaluation was also done continuously through design meetings.

1.7.12 Boundary-scan evaluation

The boundary-scan evaluation was done as a product evaluation of available boundary-scan products. The products were tested by creating boundary-scan tests on an embedded system developed by Data Respons.

1.7.13 Electronic load evaluation

An evaluation of an electronic load could not be performed. Since electronic loads are fairly expensive, it was thought likely that one would not be purchased during this master thesis. The plan was instead to build a very simple electronic load, a prototype, as a proof of concept. The prototype was not built due to lack of time.

1.7.14 General improvements evaluation

The general improvements were evaluated by checking the requirements derived from the general goals. Because of time limits, the final solution was never practically implemented or evaluated in a case study.

2 PROBLEM ANALYSIS

This chapter describes how Data Respons works with tests. The analysis consists of interviews with Data Respons personnel and an analysis of the routines and wikis documented on the file server.

When the work started at Data Respons, it was not clear what needed to be done to best solve the problems regarding tests. At first, the task given was to build a “test box” that somehow made tests of embedded systems easier. Due to the unclear purpose of this box, an analysis of the general needs regarding tests for all the offices in Sweden was done. This analysis included both the need for better documentation and the need for new or improved tools to execute the tests.

2.1 Business analysis

2.1.1 Interviews

The analysis consisted of a series of interviews, held at Kista, Lund, Västerås and Örebro and covering 18 persons: mainly developers and integration engineers, but also project managers and sales personnel. The interviews contained, among other things, questions regarding the needs for improvement on both a technical and an administrative level, and how tests and test documentation are handled and stored today. This provided a good overview and played a central role in deciding what was needed and what could be done. The analysis generated a report, Advice for decision.

2.1.2 Support document analysis

The support document system is relevant because the documentation of products and their tests is important to the quality of the products. Data Respons has a Quality Management System (QMS) that complies with ISO 9001. The QMS has three levels:

• Level 1: Strategic management process, organisation charts, quality management process, and a short description of the company and its goals.
• Level 2: Procedures, i.e. flow charts for specific procedures like order handling, data backup, document control, the sales procedure, etc.
• Level 3: Specific instructions, forms, schemes, etc.: coding standard for C and C++, ESD (Electro Static Discharge) policy, calibration rules, hardware and software development instructions (settings in development software, symbols to use, etc.), and general development instructions (time plan, milestones, work-flow, etc.).

2.2 Summary interviews

This is a summary, office by office, of about 18 hours of interviews. Observations regarding the development process have also been made every day during the entire length of the master thesis. This information and knowledge is here condensed to a summary of what is characteristic of each office and a description of how the business is run. Although the employees are tied to their home offices, the projects are often shared between offices. It is quite common for developers and project managers to visit other offices and work there for short periods.

2.2.1 Stockholm Kista At the Kista office, two HW developers, the Operations manager, one Integration engineer and one salesman were interviewed.

2.2.1.1 Office description

The Kista office has 30 employees, of which three are developers, eight are System Integration engineers or technicians and the rest belong to Administration and Sales.

Solutions department:
• Does in-house development projects. The projects can be of very different sizes: small projects can be minor upgrades of existing products or previous projects, while large projects can be the development of a completely new product about which the customer only has an idea but no competence to develop on their own.
• The most common type of project is custom-built embedded systems.
• The environmental requirements are often high. The products need to operate in environments that are, for example, damp, rapidly changing in temperature, vibrating and dusty.

Operations department:
Operations has several sub-departments, among them System Integration and RMA.

RMA department:
• Handles products that are returned to Data Respons for upgrades or repairs, both off-the-shelf products and OEM products.

System Integration department:
• Assembles prototypes and pre-series developed by Solutions.
• Assembles final series of products that are complicated to build and are not produced in big enough volumes to be profitable to assemble abroad.
• Repairs many of the products that Data Respons sells.
• Performs functional tests on products that are assembled abroad, before they are shipped to customers.

2.2.1.2 Summary of what was said during the interviews

• A general opinion was that there is a lack of guide documents on how to run projects and how to create and perform tests.
• Feedback to developers about product problems in the field and during assembly is poor.

• Statistics over common faults and problems require some work to extract, since almost all information from tests is in spreadsheets.
• Things are mostly working fairly well, but that is because most projects only involve a handful of people at a time.
• Many things are done ad hoc and not always well documented, which means that if the engineer who knows all about a specific product leaves the company, a lot of important information is lost.
• There is a need for better test specifications.
• Less human interaction in tests would be an improvement, to avoid careless mistakes.
• Some system for reusing old tests would speed up the process of creating new tests.
• Better communication between developers and System Integration is needed.
• Administrative improvement probably gives the most benefits for Solutions.
• Practical improvement probably gives the most benefits for System Integration.
• Some products are unnecessarily complicated to assemble and disassemble.
• A company-wide test strategy would do a lot to increase quality and efficiency.

2.2.1.3 Conclusions from the office

The Solutions department would benefit from a way of testing products during development that automatically logs the tests. They would also benefit from some sort of modular testing library so they wouldn't have to "re-invent the wheel" for every project. Operations would benefit from a way of doing repetitive tests faster and more accurately. They could also use a tool for systematic troubleshooting. It would also save time if Operations could re-use the tests already designed by Solutions.

2.2.2 Västerås Two developers were interviewed. One SW and one HW developer.

2.2.2.1 Office description

Västerås has about 15 employees. The office in Västerås only has Services, which means that all consultants work at the customers' sites.

2.2.2.2 Summary of what was said during the interviews

• About half of the time, there are good requirements to follow when testing.
• Testing is one of the last things in a project, so there is seldom enough time to perform proper testing when the deadline is drawing near.
• There is almost always a lack of guide documents regarding how projects, and specifically tests, are supposed to be done. This is probably because creating those documents is an overhead cost that no one wants to take.

• Clear and testable system requirements would make administrative work concerning tests easier.
• It is important to get help from people who know the product inside out. For example, a nurse knows medical equipment very well and should be involved in prototype testing a new piece of medical hardware.

2.2.2.3 Conclusions from the office

Since Västerås has no in-house developers, they will probably not be the target for this master thesis.

2.2.3 Örebro

At the Örebro office, four HW developers and one office manager were interviewed.

2.2.3.1 Office description

There are about 15 employees in Örebro; six people were interviewed. The Örebro office has almost all of its developers working in-house. Many of the developers in Örebro came from the same company before they were hired by Data Respons. This is positive because they work well together and know each other's strengths and weaknesses. It can be negative because they are used to working in a certain way, which can make synchronisation with the rest of Data Respons difficult. Örebro has one developer whose main competence is testing.

2.2.3.2 Summary of what was said during the interviews

• It is easier to prioritize testing if the customer has good technical knowledge.
• The customers are more willing to pay for testing if they understand why it is important.
• If the product needs a safety classification, well-specified and well-documented tests are an absolute requirement.
• If Data Respons has total responsibility, it is more common that only the most essential part of testing is done. This is because when the project hours are all used up, testing still remains to be done. Therefore, testing is sometimes done for "free".
• Projects are often so different from each other that it is hard to have a template for how projects should be managed.
• If it is known from the start that it will be a high-volume product, more effort is put into finding cheap components, the mounting sequence, DFT (Design For Test), etc.
• It is hard to find relevant information to re-use when searching in old project folders. It is probably more time consuming to find and re-use old templates and pieces of code than to create them from scratch when starting a new project.
• When starting a new project, it should be decided from the start what processes and document templates should be used, to make sure all file names and documents use the same style and nomenclature.
• Construction verification and production testing take a lot of time, especially if an error is discovered that requires changes to the hardware construction.

• One of the HW developers estimates that he puts about 20% of his time into testing.
• EMC problems can be very difficult to resolve. It is important to be thorough when constructing the PCB; it pays off later in the project.
• Early troubleshooting is difficult. It is hard to know if the problem is in the HW or the SW; from experience it is about 50/50 HW and SW errors.
• Intermittent errors are a problem, especially if the error is difficult to recreate.
• A good way of testing ports is using loopback plugs. Making the device communicate with itself tests both ingoing and outgoing ports.
• Board bring-up is often so different between boards and developers that there is no general way to do it.
• When testing new HW, it is important to use small pieces of code to make sure the SW is not the problem.
• Tests used by developers during construction are sometimes passed on to production with little or no modification. This is seldom very good, since these tests are not made to be run fast, in series or in parallel, and the logging from these tests is often not suitable for production logs.
• It might be good if someone were always responsible for testing.
• It would probably do more good to resolve the administrative issues than to implement some type of new testing hardware.
• A test manager for Data Respons Sweden would perhaps work well. That person could quite quickly build a library of HW and SW tests and would be responsible for seeing to it that testing is properly handled in all projects.

2.2.3.3 Conclusions from the office

The Örebro office is similar in many ways to the Kista office, except that it doesn't have production or final assembly. If some kind of hardware device is developed, Örebro will probably not be the primary user. If an administrative solution is the result of this master thesis, then Örebro will most likely benefit from it.

2.2.4 Lund At the office in Lund, five developers were interviewed.

2.2.4.1 Office description

There are about 25 employees in Lund. The company Lundinova was bought by Data Respons in 2008; the name Lundinova was kept because it is well known in that region. Lund has most of its developers working in-house. They often do high-tech compact solutions and have made a number of accessories for a large mobile phone manufacturer.

2.2.4.2 Summary of what was said during the interviews • A project start-up might look like:

22 (85) • A customer comes with an idea • A rough sketch for a solution is done quickly • If the customer likes what they see, the project moves on • A brainstorming with everyone that is available • Calm thing down and get realistic solutions written down • Choose what people should be in the project, depending on expertise and availability • Scrum is a project form that involves 5 minute daily meetings (scrums) where everyone answers the questions: What have I done since the last scrum? What will I do until the next scrum? What obstacles are in my way? These meetings are good because they keep the customer's ideas and developers' ideas from diverting. • A lot of work is done ad-hoc. • The size of projects varies a lot. From making a complete product out of a vague idea to just designing a PCB from a wiring diagram without any functional responsibility. • Testing is often thoroughly discussed at the start of projects but is easily down-prioritized as the project moves along. • Tests are often chosen from experience if the customer doesn't have specific requirements. • There is access to a simple EMC/EMI chamber where the developers can run the equipment by themselves. The chamber is not certified but it speeds up development a lot to be able to run EMC/EMI tests during development to check that everything is within reasonable limits. • EMC/EMI certification of products is done by a certified test-house. • Some companies have specific test templates, that makes it easy to construct tests. • It is rather uncommon that the customer demands test logs and test documentation from the development process. • Test results are checked against the requirements in a document that specifies all values from different measurement points. Often referring to component data sheets. • It is not trivial what tests to pass on from development to production. • Mentorship is unofficially implemented, a junior developer works side by side with a senior developer for a while. 
• Some test templates are re-used, but that is highly individual from developer to developer.
• Having HW and SW development at different sites complicates the process. There is a risk that an “us and them” feeling develops when problems start to surface in the projects.

• Having tests done abroad may be cheaper, but can complicate things because of language barriers and the fact that it takes time to get there if an issue can't be resolved over telephone or video chat.

2.2.4.3 Conclusions from the office
The Lund office is quite different from Kista and Örebro. When the interviews were made they were not using the same documentation system as Kista and Örebro. Lundinova has a very different product portfolio containing more compact solutions often created from scratch, whereas Kista and Örebro make more modular solutions that are more PC-like. Lund has many skilled developers and they are very creative in their solutions. As with Örebro, if some kind of hardware device is developed as a result of this master thesis, they will probably not be the primary user. If an administrative solution is the result of this master thesis, then Lundinova will most likely benefit from it.

2.3 Company-wide areas of improvement
• Testing needs to get a higher priority when planning the working hours for a project.
• A testing group within the company would quickly accumulate knowledge and help project groups decide what tests are relevant and necessary.
• There is a need to better port tests from development to production.
• Documentation of testing needs improvement.

2.4 Test's role in a greater perspective
All too often, time runs out and testing, the last thing done in a project, suffers. This is a problem for the entire industry, not just Data Respons. Competition is hard and everyone wants (and needs) to push prices down in order to win a request for quotation. This phenomenon is even more obvious in consumer products. The first version of a new product almost always contains a large number of bugs, because it is cheaper and easier to let the customers find bugs and it is so important to be among the first on the market with a new model or product. This is a necessary, but difficult and dangerous, balancing act. Releasing incomplete and untested products will cause bad will, but testing too thoroughly will make the product obsolete by the time it reaches the market.

3 TARGET AREA AND APPROACH
This chapter explains the general approach to the problem and from what angle the problem was engaged. It also defines what is possible to achieve within the frames of a master thesis.

In order to reach the best possible result, the identified improvements needed to be delimited. A target area needed to be identified to shape the solution into a manageable package that could be processed. The pre-study showed that there were two main areas of improvement: administrative improvements and technical solutions to problems regarding tests. Therefore, two different approaches were formulated. The two approaches were created to get a clearer view of what the actual work would be. This was necessary in order to make a qualitative decision on which target was best suited for both Data Respons and the students. The advice for decision was also a requirement from Solutions manager Jeanette Fridberg in order to continue the work.

Approaches in short:
• The top-down approach focuses on routines, methods and documentation, which later may result in the need for a technical solution to complement the administrative solution.
• The bottom-up approach focuses on a generic technical solution to solve known problems regarding how tests are done today. Support documents and routines will be created mainly to complement the use of the technical solution.

The definitions of problems, approaches and improvements were identified in the pre-study by analysing the interview material, see “APPENDIX 1: Interviews, brief“.

3.1 Top-down approach
The top-down approach was defined as finding the cause of the “test related issues” (problems) at hand and solving the underlying problems and/or finding other, larger problems that can be solved with less effort. It focuses on the routines for how things are done and what resources are at hand to solve the problems. The method makes it possible to find and solve the right problem.
Effort would be put into pinpointing the problems and giving advice on how to solve them, rather than actually solving them.

3.1.1 Identified issues
• There are uncertainties regarding what the most needed improvements are regarding system testing to improve product quality at Data Respons Sweden.
• There are uncertainties about how tests are administered and what routines exist concerning tests.
• A lot of time is spent recreating test documents.
• The developers use their own templates for test documentation. This makes quality assurance difficult. Documents become hard to handle and interpret by anyone but the creator.
• Many of the developers and project managers don't know what templates are available.
• For smaller projects, the sales person in some cases becomes project manager. This often makes documentation and tests for off-the-shelf systems inadequate, which causes problems if the person responsible is no longer available and there are no documents at hand.
• Many of the developers from Örebro and Lund claim that better documents and guidelines regarding tests would probably improve more than a technical solution for the hands-on testing.

3.1.2 Approach
• Find out what templates and project models are available at the different offices.
• Analyse:
  • several projects and what templates have been used in them.
  • what has gone wrong and whether it could have been avoided using better tests and test templates.
  • the most cost-efficient ways to make sure similar errors are not repeated.
• Improve:
  • the analysed material, starting with the most important material.

3.1.3 Delimitations
• This method would not include quality management such as routines for cross-checking and routines for how routines are created. It would concern only testing routines and documentation of test methods.
• Effort would be put into pinpointing the problems and giving advice on how to solve them, not necessarily solving them.

3.1.4 Main goals
• Shortening the time spent on tests by:
  • creating templates for board bring-up, logging, prototype test documentation and production test documentation.
  • making sure the tests that are made are relevant. Looking at what goes wrong with existing products (RMA, test logs) gives a picture of what is prone to fail, and thus what is important to test.
  • developing or investing in new test equipment. The equipment will either be bought off the shelf or developed in-house.
• Ensuring the quality of tests by:
  • providing instructions for how traceable and repeatable tests should be executed and documented.
  • supplying the tools needed to run tests with relevant and quantifiable results.

• Making sure data from tests is stored appropriately and is easy to access by:
  • analysing ways of implementing automatic documentation into new test equipment.
  • analysing ways of making test logs searchable and easy to pull statistics from.

3.2 Bottom-up approach
The bottom-up approach was defined as dealing with the problems directly, not looking into what caused the problems in the first place or identifying other, greater problems. In this case the focus is on developing a solution to known problems and limitations and on minimising human interaction. Only support documents and routines regarding the solution would be produced.

3.2.1 Identified issues
• Functional testing is highly manual.
• Functional testing is time-consuming.
• The method to carry out tests in production is very product specific.
• The way of logging tests in production is highly manual.
• No voltage, current or load tests are done in production.
• Monotonous tasks may cause human error.
• There are no methods to reuse tests between products.
• There are no methods to do the exact same test again, years later.

3.2.2 Approach
• Analyse what the Solutions and Operations departments in Stockholm need regarding tests and automation.
• Analyse how tests can be done more easily.
• Develop a method for automated logging of voltage and current of external ports and power feed during functional tests.
• Develop dynamic software for test automation and logging.

3.2.3 Delimitations
• The end user of the solution will primarily be Operations and Solutions in Stockholm/Kista.
• Issues regarding test routines at the Operations and Solutions departments will not be handled.
• No deeper analysis of the backgrounds to the problems will be made.

3.2.4 Main goals
• Make it possible to quickly run physical layer tests
• Speed up repetitive tests of multiple units
• Minimise errors caused by monotonous tasks for the tester
• Make tests traceable and repeatable
• Automate documentation of tests

3.3 Advice for decision
To evaluate the approaches, an advice for decision was created, see “APPENDIX 2: ADVICE FOR DECISION“. The document contains a description of the approaches and an advice on which approach to take. The bottom-up solution was considered the best approach. The motivation was that Data Respons benefits directly from an automated and unified test platform. The solution can also be developed further to create automated installations of custom operating systems and other features that would ease the work for both Operations and Solutions personnel. It should also be noted that a light version of the top-down approach had already been done in the pre-study.

At the same time, the need for working with the problem area covered by the top-down approach was identified as highly necessary. Issues regarding support documents and routines for both developers and project managers were found in almost all the interview material. The conclusion, however, was that this work should be done on a strategic level with respect to the different customers' projects and the ways of working at the different offices. The work should be done by someone who has long experience with managing development of embedded systems.

3.4 Decision
Based on the Advice for decision report, Jeanette Fridberg, Solutions manager in Stockholm, followed the advice and decided that the bottom-up approach should be used for this master thesis.

4 TESTING METHODS OF EMBEDDED SYSTEMS

This chapter contains the research regarding test methods available for embedded systems. The Boundary-scan and power loading methods are explained in detail (in-depth studies); the other methods are explained briefly. Test coverage areas are defined and the methods are mapped to the different areas.

4.1 Testing methods

4.1.1 Black box testing
Black box testing is a test method where functionality is verified by sending test patterns to the DUT (Device Under Test) and comparing the response to an expected response. How the DUT produces the response is not of interest. The main focus in black box testing is to verify that the DUT acts according to the expected behaviour. Black box testing is normally done during production testing.
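The stimulus/response principle of black box testing can be sketched as follows. The echo DUT and the test vectors are hypothetical stand-ins for a real device, which would typically be driven over a serial port or network connection:

```python
# Minimal black-box test sketch: drive the DUT with test patterns and
# compare each response against the expected response. How the DUT
# computes the response is deliberately ignored.

def run_black_box_test(dut, test_vectors):
    """dut: callable taking a stimulus and returning a response.
    test_vectors: list of (stimulus, expected_response) pairs.
    Returns the list of failures (empty list means all tests passed)."""
    failures = []
    for stimulus, expected in test_vectors:
        actual = dut(stimulus)
        if actual != expected:
            failures.append((stimulus, expected, actual))
    return failures

# Hypothetical DUT stub: an echo device that upper-cases its input.
def echo_dut(msg):
    return msg.upper()

vectors = [("ping", "PING"), ("at", "AT")]
assert run_black_box_test(echo_dut, vectors) == []
```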

4.1.2 White box testing
The white box testing method focuses on testing how the DUT works internally. The method ranges from analogue and digital component tests to investigating buffer levels and data flow in a program. White box testing is normally done during product development testing.

4.1.3 Examples of different tests
• In-circuit test (ICT)
• Stand-alone JTAG test
• Automated X-ray inspection (5DX/AXI)
• Automated optical inspection (AOI)
• Continuity or flying probe test
• (Board) functional test (BFT/FT)
• Burn-in test
• Environmental stress screening (ESS) test
• Highly accelerated life test (HALT)
• Highly accelerated stress screening (HASS) test
• Ongoing reliability test (ORT)
• System test (ST)
• Final quality audit process (FQA) test
Some of these are explained later in this chapter.

4.2 Areas of testing
There are several test methods used when developing embedded systems. Which type of test to use, and why, depends on DUT usage and production volume. It also depends on which part of the product life cycle the product is in. The tests also cover different areas of the DUT in different ways. When studying the different test methods and the hardware of the systems available at Data Respons Kista, the components and logic of a DUT were divided into eight areas, see Illustration 3: Embedded systems test areas.

[Illustration 3: Embedded systems test areas — the eight blocks: Applications; OS, drivers; Robustness; Logic circuitry; Power electronics; Signal quality; EMC; Mechanics]

The Applications block represents testing the applications delivered with a system. The tests are used to verify that all configuration files, licenses and binaries required to run the applications are installed and correct. Usually the application tests exercise the lower-level blocks as well. Testing an application that initiates a network connection over GPRS to the Internet includes testing the application itself, parts of the OS, the driver and the GPRS modem hardware functionality, such as logic connections and signal quality. It is, however, very difficult to get a full test of the underlying blocks with this method.

The OS and drivers block represents testing that the devices and resources are available to the operating system. The difference between application-level and driver-level verification is not always clear. When testing a serial interface at application level, an application such as HyperTerminal can be used. The equivalent at OS/driver level would be to access the device directly through the OS device buffer.

The Logic circuitry block represents testing the physical connections between logical components on the PCB. The tests are used to verify the mounting and soldering of the logic components on the PCB. They also test that all nets on the PCB are intact.

The Power electronics block represents testing the circuitry used to provide power to the PCB and to external devices from the PCB. Testing these circuits usually involves applying an external load to the system, or running the system at its maximum performance, and analysing the quality of the provided power. Quantities such as ripple, level and heat are the objectives of these tests. The tests are used to verify the mounting and soldering of the power circuit connections and heat sinks on the PCB.

The Signal quality block represents testing the quality of the different analogue and digital signals.
The tests are used to verify the mounting and soldering of termination resistors and interference suppression capacitors on the PCB.

The EMC block represents analysing how sensitive the DUT is to electromagnetic interference and how strong the electrical signals propagating from the DUT are. The tests are used to verify that the PCB design is correct according to the EMC requirements.

The Mechanics block represents testing water resistance, pressure, vibration, temperature and other mechanical properties. These tests are usually done during product development, but some IP-classified products require additional testing during production according to Data Respons' policy.

The Robustness block covers several robustness areas. The robustness areas identified at Operations at Data Respons Kista are mechanical, electrical and logical (computational). The robustness tests analyse how well, or for how long, the system operates under non-ideal conditions. Mechanical robustness tests cover physical quantities such as vibrations and heat. Electrical robustness tests cover EMI, EMC and electrical overload tests. Logical robustness tests include applying logic load to the system, such as network traffic and CPU operations.

4.3 Functional test
Functional test is a test method that runs the system at native speed. Tests are run on a test OS or on the DUT's intended OS. This is a common test method in a production environment, since the DUT usually has a functional OS at this product stage and multiple systems can be tested in parallel with no extra hardware required. Functional tests can be used to test parts of the logic circuitry, power electronics, signal quality and robustness at the logical and electrical level. The test method is preferably used to test OS, drivers, applications and robustness. See Illustration 4: Functional test areas.

[Illustration 4: Functional test areas]

The disadvantage is that functional tests usually do not completely test all the blocks. This creates a situation where the partially tested blocks often are left partially tested, see “APPENDIX 1: Interviews, brief“. It is also difficult to pinpoint the blocks that actually failed during a failed functional test. If the developers involved with creating the PCB do not have access to the DUT OS and drivers, the cost of creating functional tests is fairly high.
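A functional test run with the automated, traceable logging asked for in chapter 3 can be sketched as a small runner. The step names and check functions below are hypothetical placeholders; on a real DUT each step would exercise an interface (serial port, network, USB) through the OS:

```python
# Sketch of an automated functional-test runner that logs every step
# with a timestamp, so the run is traceable and repeatable.
import time

def run_functional_tests(steps):
    """steps: list of (name, check) pairs; each check returns True/False.
    Returns a log of (timestamp, name, verdict) tuples for archiving."""
    log = []
    for name, check in steps:
        verdict = "PASS" if check() else "FAIL"
        log.append((time.time(), name, verdict))
    return log

# Hypothetical placeholder checks:
steps = [
    ("ethernet link up", lambda: True),
    ("serial loopback",  lambda: False),
]
log = run_functional_tests(steps)
assert [verdict for _, _, verdict in log] == ["PASS", "FAIL"]
```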

4.4 In-Circuit Testing
In-circuit testing (ICT) started to appear in the 1970s and was the first step towards automating the procedure of manually testing individual components separately after PCB mounting [Park1]. Here the ICT techniques bed of nails and flying probe are discussed.

4.4.1 Bed of nails Bed of nails, uses a test fixture with several nails to connect to the PCB components. This enables testing each component individually. A similar manual action would be connecting a multimeter or oscilloscope to strategic positions on the PCB. The method enables testing of resistors, capacitors, inductors and open/shorted nets.

[Illustration 5: In-circuit test areas]

The PCB is put on top of the bed of nails and pressed down, connecting the components to the nails. The ICT method can be used to simultaneously connect hundreds of strategic positions and then use programmable switching logic to connect to the nails through the main plate. The nail connections are used to test the components by applying signals, currents or voltages and measuring the result. ICT is also used to produce and read logical patterns on ICs [Fein1]. See Illustration 6: Bed of nails [www.ami.ac.uk].

Illustration 6: Bed of nails [www.ami.ac.uk]
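The per-component check an ICT system performs can be sketched as a tolerance comparison. The nominal values, tolerances and measurements below are illustrative only; a real fixture would return measured values from the nails:

```python
# ICT-style component check: measure a component through its pair of
# nails and verify the value is within tolerance of the nominal value.

def check_component(measured, nominal, tol):
    """tol is relative, e.g. 0.05 for a 5 % resistor."""
    return abs(measured - nominal) <= tol * nominal

# Hypothetical measurements from the fixture (values in ohms):
assert check_component(measured=1020.0, nominal=1000.0, tol=0.05)   # 1 k resistor, OK
assert not check_component(measured=0.5, nominal=1000.0, tol=0.05)  # shorted net
```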

Advantages
The advantages of ICT are that every nail can be used to very accurately analyse signal quality and levels. This is very useful when testing power and radio circuits, where signal quality and voltage stability are important. Compared to manual testing of components, ICT delivers a method for automatically running multiple tests and saving the results in an automated manner, well suited for very large production volumes [Fein1].

Disadvantages
Designing ICT tests is time-consuming, since the bed of nails test fixture has to be specifically designed for each new PCB version. During PCB design it is also important to design for ICT. Many of the strategic points are not necessarily reachable for the nails. Therefore test nets need to be created and connected to pads on the surface of the PCB in order to test the unreachable nets. This is a time-consuming effort. As PCBs become smaller and more complex, and as ICs tend to be surface mounted in such ways that the pins are not accessible (BGA/PGA), the use of ICT has decreased and will decrease further over time [GOEP1].

4.4.2 Flying probe Flying probe is much similar to the bed of nails testing method. The difference is that instead of a large numbers of fixed nails, a couple of flying probes (nails) are used, the flying probes are mounted on a robot arm. Lately, flying probe tests systems are combined with capability of in- circuit Boundary-Scan capabilities. This means that no test access port (TAP) is needed. The probes connect directly to the IC pins. This is a very interesting combination with many advantages. The tests coverage is greatly increased since the logic of the ICs now can be tested and built in self tests can be executed. Also the time to cover a full test on a large PCB is greatly reduced since a large portion of the test can be done with Boundary-scan test logic [Spea1]. Advantages This solution makes it easier to create new tests as the bed of nails fixture is no longer needed. The probes are positioned using robot arms. A piece of test software can automatically identify test points from BOM (Bill Of Material) lists, net-lists and PCB layout schematics. Just like bed of nails testing, the flying probe tests can be used to check mounting of components, soldering, and component values such as resistance and inductance. The positioning of the probes is precise which enables testing of small ICs such as small outline packages where there is room for only one nail at a time [Spea1]. Disadvantages The initial investment costs for flying probe tests systems are very high. To defend the investment a large amount of different, medium volume, high end PCBs needs to be tested. Low cost complex high volume PCBs such as low end PC main boards is not worth testing since the flying probe tests becomes too slow or does not cover the whole PCB. In this case a bed of nails is a better alternative [GOEP1]. 4.5 X-ray X-ray is a method to examine the circuit board, discrete components and the internals and connections of integrated circuits. 
X-ray testing, or examination, covers the hardware of the electronics, see Illustration 7: X-ray test areas. It is usually done while the DUT is not powered on. X-ray is used as a mass production test for products with very high reliability demands. It can also be used to find out why a specific product has broken or doesn't work as intended.

Illustration 7: X-ray test areas

The X-ray machines are high-resolution machines specially designed to take X-ray images of electronics. To find defects inside the PCB (Printed Circuit Board), the X-ray machine needs a resolution of around 20 μm [Doi1]. Taking an X-ray image of a PCB is problematic. Many of the atoms in the materials (solder, glass fibre, silicon, copper) on a PCB with components mounted are heavy and absorb and scatter X-rays at different rates. This makes the image low-contrast and distorted and requires extensive image processing [Doi1]. So it is not possible to just “take a photo” of the PCB using X-ray.

Using X-ray for troubleshooting
When manufacturing electronics there is always a risk of defects both in the PCB and in the soldering of the components on the PCB. The defects can be explicit: the product doesn't function at all. The defects can also be latent: the defect might produce a functional error after some time. One problem with very compact soldering techniques like BGA (Ball Grid Array) is that the temperature coefficients for material expansion differ between the components and the PCB they are soldered to. Conventional surface-mounted devices have tiny metal legs that connect the component to the PCB. When temperatures vary, the materials in the PCB and the components expand differently. This mechanical stress is handled by the legs of the component, since metal is flexible, see Illustration 8: conventional SMD IC, BGA IC. A BGA-mounted device is, on the other hand, almost rigidly mounted to the PCB, which puts a lot of strain on the solder balls, causing them to crack. The defects can occur in different parts of the product.
• Non-wet solder joints
• 3D defects like tunnels in the PCB
• Cracking of solder balls in BGAs
• De-bonding inside the IC (Integrated Circuit) package
When the IC is X-rayed, see Illustration 9: X-ray image of an IC with a Ball Grid Array connection, the image is processed and the solder balls are examined by a computer program that compares the roundness of the balls [QUAN1], the size of the balls and the distance between them. Modern 2D X-ray systems can find about 96% of the defective solder balls [Said1].

Advantages
• Possible to examine solid components
• Possible to examine soldering techniques like BGA, where the joints are covered by the IC
• Non-destructive examination

• Computer power and image processing have made great advances, which makes the X-ray method cheaper and easier to use.

Disadvantages
• X-ray is ionizing radiation, which is unhealthy to the test operator in too large doses. X-rays can also degrade materials in the products.
• Expensive equipment.
• Only physical hardware examination.

Conclusions
Due to the high cost of performing an X-ray scan of a board or component, it is often done when trying to find the cause of an error. For high-volume functional testing, methods like Boundary-Scan are much cheaper. But to be able to determine how the connections look under a BGA IC, and to examine delamination and the inside of PCBs, X-ray is a good alternative.
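The solder-ball screening described above (comparing roundness, size and pitch) can be sketched as a rule check on measured ball geometry. The thresholds and measurements are illustrative and not taken from [QUAN1] or [Said1]:

```python
# Screen BGA solder balls from X-ray image measurements: flag balls whose
# roundness, diameter or pitch deviates beyond illustrative thresholds.

def screen_balls(balls, d_nom, pitch_nom, roundness_min=0.9, tol=0.15):
    """balls: list of dicts with 'roundness' (0..1), 'diameter', 'pitch'.
    Returns the indices of balls flagged as potential defects."""
    defects = []
    for i, ball in enumerate(balls):
        if (ball["roundness"] < roundness_min
                or abs(ball["diameter"] - d_nom) > tol * d_nom
                or abs(ball["pitch"] - pitch_nom) > tol * pitch_nom):
            defects.append(i)
    return defects

# Hypothetical measurements (millimetres) extracted from an X-ray image:
balls = [
    {"roundness": 0.97, "diameter": 0.30, "pitch": 0.50},  # good ball
    {"roundness": 0.70, "diameter": 0.30, "pitch": 0.50},  # deformed/cracked
]
assert screen_balls(balls, d_nom=0.30, pitch_nom=0.50) == [1]
```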

Illustration 8: conventional SMD IC, BGA IC

Illustration 9: X-ray image of an IC with a Ball Grid Array connection

4.6 Boundary-Scan
Boundary-Scan (BS) is an in-system testing method used to test the connection nets, the soldering and mounting of devices on the PCB, and the device functionality. This is done by using BS-compatible ICs on the PCB to test the PCB. It is a PCB self-test mechanism; all that is needed is a test access port (TAP) on the PCB, external test equipment and test software. The tests are executed and controlled via a separate test network on the PCB, often referred to as the JTAG chain, which is the data bus connecting the BS-compatible devices on the PCB to the TAP. The JTAG chain is also commonly used to program the devices on the PCB. The basic operations used in the BS state machine are to set different states on the pins of the BS-compatible IC and then analyse the result of the pin change. This is described in detail in chapter 4.6.1 Boundary-scan circuitry. There are different levels of tests that can be performed, depending on what the BS-compatible IC and test equipment support. This is described in chapter 4.6.5 The Boundary-Scan Standards. To better understand what the different standards add to the test functionality, and what other functions can be added to achieve better tests, a boundary-scan test product evaluation was made; the results are presented in chapter 8 BOUNDARY-SCAN PRODUCT EVALUATION. As can be seen in the test area coverage (see Illustration 10: Boundary-scan test areas), the areas covered are only logic circuitry and parts of the power electronics, but these nets usually represent a very large portion of the PCB, and the logic circuitry is difficult to test with other methods. The test is also very isolated, since there are no OS or application dependencies. This results in very accurate fault detection and error identification of the PCB hardware.

[Illustration 10: Boundary-scan test areas]

4.6.1 Boundary-scan circuitry
Boundary-scan (BS) is a set of design rules used to implement a test method for testing ICs and the connections to the ICs. The testing standard is mostly known as IEEE/ANSI 1149.1. When implemented, it enables testing of the IC itself and the ability to read and set levels on the pins of the IC, to test the devices and nets connected to the IC. The BS logic takes control over the IC main logic and implements the pin states that the test requires, see Illustration 11: BS enabled IC. The fundamental functions are setting the pins as output (high, low, pull-up) or input. When multiple BS devices are connected on the PCB, they can be accessed simultaneously through the JTAG chain, and pins pulled high on one of the ICs can be confirmed high on the other, assuming they are in some way connected. This enables detection of poorly soldered or shorted ICs and nets on the PCB. The BS circuitry in the IC is separated from the IC's main logic and can run tests without the main logic activated, which makes it possible to test the PCB without any OS, drivers or software.

[Illustration 11: BS enabled IC — the boundary-scan logic sits between the I/O pins and the IC main logic, with TDI, TDO, TCK and TMS connections]
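The boundary-scan cells around the I/O pins behave as a shift register between TDI and TDO. A minimal model of such a register, not tied to any particular IC, can be sketched as:

```python
# Model of a boundary-scan register: one cell per I/O pin, clocked as a
# shift register from TDI towards TDO. After len(cells) clock cycles the
# first bit shifted in at TDI appears at TDO.

class BoundaryScanRegister:
    def __init__(self, n_cells):
        self.cells = [0] * n_cells  # cells start cleared in this model

    def shift(self, tdi_bit):
        """One TCK cycle in Shift-DR: returns the bit falling out on TDO."""
        tdo_bit = self.cells[-1]
        self.cells = [tdi_bit] + self.cells[:-1]
        return tdo_bit

reg = BoundaryScanRegister(4)
out = [reg.shift(b) for b in [1, 0, 1, 1, 0, 0, 0, 0]]
# The first 4 bits out are the register's initial contents (zeros); then
# the pattern 1,0,1,1 emerges in the order it was shifted in.
assert out == [0, 0, 0, 0, 1, 0, 1, 1]
```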

4.6.1.1 Test Access Port, TAP
The four pins required to use boundary-scan are: Test Clock (TCK), Test Mode Select (TMS), Test Data In (TDI) and Test Data Out (TDO), see Illustration 12: Listen mode. With the clock and test mode select, different states can be reached in which the test data input and output are handled differently. The state machine that provides this logic is called the TAP controller and is described later in chapter 4.6.3 The state machine.

[Illustration 12: Listen mode]

4.6.1.2 Three pin modes
BS uses three different modes to operate. The modes depend on how the multiplexers are set. The multiplexers are the three-state switches used to connect the BS logic to the IC pins and main logic.

Non-invasive (listening)
This mode allows the Boundary-Scan TAP to be used without disturbing the normal operation of a chip. Functions such as bypass, idcode, sample and usercode can be used to run tests on the DUT while it is running an operating system. This corresponds to chapter 4.6.2.4 Real time functionality (states and logic).
• Pin mode 1: Listening mode (non-invasive), the BS logic can be used to read the states of the pins while the IC main logic is in running mode.

Pin-permission (controlling)
This mode separates the PCB pins from the IC's core logic and connects the BS multiplexers to either the PCB or the IC core. This corresponds to the modes Net connectivity testing on PCB, Main logic testing of BS compatible ICs, and Passive and logical devices on PCB.
• Pin mode 2: External tests (pin permission), the I/O pins can be disconnected from the main logic, allowing the boundary-scan logic to take control of the I/O pins connected to the PCB.
• Pin mode 3: Internal tests (pin permission), the boundary-scan logic can also connect to the main logic, disconnecting the I/O pins from the PCB.

4.6.2 The four test areas
These three modes are used to test four areas of the PCB. The test areas are:
• Net connectivity testing on the PCB
• Main logic of BS-compatible ICs (built-in self-test)
• Passive and logical devices/components on the PCB
• Real-time functionality (states and logic) normally running on the PCB
The test modes are used to interact with different parts of the PCB, and the listening mode is used to scan the states of the pins without interfering with the main logic.

4.6.2.1 Net connectivity testing on PCB
At the PCB level, BS is used to test the connections to the IC. The main logic is disconnected, see Illustration 13: External connection test. Defects such as stuck-at-high, stuck-at-low, broken nets and shorted nets can be found by manually changing the states of the I/O pins of the IC and reading the result. The method for testing nets is described in practice in “APPENDIX 9: TOPJTAG PRODUCT EVALUATION“.

There are different ways of testing nets connected to a boundary-scan enabled device. Depending on how the nets are connected to the other devices on the PCB, the nets can be set to high, low and resistive high (pull-up high). By trying to set the pins on the BS device to a state and then reading the state on another IC, the change made can be verified. If a net is connected to two ICs with boundary-scan support, the net can be set to high or low on one of the ICs and read/measured by the other IC. This test can be used to verify that there is a connection between the ICs, and that the net is not stuck at high, e.g. short-circuited to VCC, or stuck at low, e.g. short-circuited to GND.

[Figure: IC main logic surrounded by boundary-scan logic, with the TAP signals TDI, TDO, TCK and TMS.]
Illustration 13: External connection test
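The drive-and-read procedure described above can be sketched as a small simulation. The model below is purely illustrative and not code from this thesis; the fault flags, and the assumption that a floating (broken) net reads low, are hypothetical simplifications.

```cpp
#include <cassert>
#include <string>

// Hypothetical model of one boundary-scan testable net: a level is driven
// from device A and the level seen at device B is read back.
enum class Level { Low, High };

struct Net {
    bool stuck_high;   // short circuit to VCC
    bool stuck_low;    // short circuit to GND
    bool broken;       // open circuit
    Level driven;      // level currently driven by device A

    // Level observed by device B's boundary-scan cell.
    Level observed() const {
        if (stuck_high) return Level::High;
        if (stuck_low)  return Level::Low;
        if (broken)     return Level::Low;  // assume a floating net reads low
        return driven;
    }
};

// Drive both levels and classify the fault, mirroring the manual
// "set on one IC, read on the other" procedure.
std::string diagnose(Net net) {
    net.driven = Level::High;
    bool high_ok = (net.observed() == Level::High);
    net.driven = Level::Low;
    bool low_ok = (net.observed() == Level::Low);

    if (high_ok && low_ok)  return "net OK";
    if (!high_ok && low_ok) return "stuck-at-low or open";
    if (high_ok && !low_ok) return "stuck-at-high";
    return "indeterminate";
}
```

Note that, as in the real test, a broken net and a short to GND are indistinguishable from a single drive-and-read pair; separating them requires the resistive (pull-up) drive mentioned above.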

4.6.2.2 Main logic testing of BS compatible ICs

The main logic testing of BS compatible ICs is a set of extra functions that can be built into the IC. The Boundary-Scan protocol is used to start internal tests by sending vendor specific instructions to the IC's boundary-scan interface. The result is then read the same way as reading the states of the external pins. The PCB connections to the main logic are disconnected and the BS has full control over the main logic, see Illustration 14 "In circuit test". IC manufacturers can implement their own built-in self tests (BIST). There are no predefined rules for how the BIST tests should be executed and which tests should be available. The method used to execute the tests is up to the IC vendor to specify or supply. [Park2]

[Figure: boundary-scan logic connected to the IC main logic, with the TAP signals TDI, TDO, TCK and TMS; the PCB pins are disconnected.]
Illustration 14: In circuit test

4.6.2.3 Passive and logical devices on PCB

The test is executed by addressing logical devices (such as RAM) connected to the boundary-scan chain in such a way that the logic can be tested. All logical devices that are connected to a BS compatible device can be tested through the BS compatible device, by applying the right logic to the logical device and controlling its data bus through the BS pin-permission control. The main logic functionality is emulated with the BS logic to access and test memory and other logical devices, see Illustration 15 "External logic test". By setting pin states correctly, tests such as memory functionality tests can be done. For example, the memory device ID can be polled by writing the logic address to the RES byte (Read Electronic Signature) on the memory bus and comparing the response with the actual device ID. This is covered more in practice in "APPENDIX 8: XJTAG PRODUCT EVALUATION".

[Figure: external logic device connected to the IC; the boundary-scan logic, with the TAP signals TDI, TDO, TCK and TMS, drives the bus in place of the main logic.]
Illustration 15: External logic test

4.6.2.4 Real time functionality (states and logic)

In the non-invasive mode, the BS logic is used as a pin sniffer, observing the states of the pins while the IC main core is operative and the PCB logic is running its main application. The mode is very convenient to use, since the state of a pin can be read on the IC and compared to the state it should have according to the specification. The mode can also be used to verify that clocks and data buses are running, since the states of these buses and nets are fluctuating. It is important to note that the frequency of a clock or data bus is usually much higher than the read frequency of the BS chain, so the logic signal itself usually cannot be read. How this method can be used is covered in "APPENDIX 9: TOPJTAG PRODUCT EVALUATION".

[Figure: IC main logic with its I/O pins observed by the boundary-scan logic, with the TAP signals TDI, TDO, TCK and TMS.]
Illustration 16: Listen mode

4.6.3 The state machine

Boundary-scan uses a state machine (the TAP controller) to control the functionality of the standard. The state machine reads the level of TMS when the state of TCK changes. Depending on the level of TMS, the machine decides what the next state will be. As an example, holding TMS high for 5 TCK cycles changes the state back to the starting point, which is Test-Logic-Reset. This is also used as an initial sync of the state machine, to be sure that the initial state is the starting point. It is also made with safety in mind, since the TMS net should have a pull-up resistor attached to it according to the TAP design rules [XJTA1]. If TMS is pulled low for some reason, the state is changed to Run-Test/Idle, and will not proceed to the next state until TMS is pulled high. This prevents any accidental boundary-scan activity while the system is running. The functions in the state machine follow two tracks, where the first is the data register (DR) track and the second is the instruction register (IR) track. The instruction register is used to select what should be done, and the data register is used to read or set the instruction selected by the instruction register [Park2].

Illustration 17: BS State machine
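The TAP controller can be expressed as a transition function over the 16 states defined by IEEE 1149.1, where the next state is chosen by the TMS level sampled on each TCK edge. The sketch below follows the standard's state diagram; it is an illustration, not part of the software developed in this thesis.

```cpp
#include <cassert>

// The 16 states of the IEEE 1149.1 TAP controller.
enum State {
    TestLogicReset, RunTestIdle,
    SelectDRScan, CaptureDR, ShiftDR, Exit1DR, PauseDR, Exit2DR, UpdateDR,
    SelectIRScan, CaptureIR, ShiftIR, Exit1IR, PauseIR, Exit2IR, UpdateIR
};

// Next state as a function of the current state and the TMS level
// sampled on the rising edge of TCK.
State next(State s, bool tms) {
    switch (s) {
        case TestLogicReset: return tms ? TestLogicReset : RunTestIdle;
        case RunTestIdle:    return tms ? SelectDRScan   : RunTestIdle;
        case SelectDRScan:   return tms ? SelectIRScan   : CaptureDR;
        case CaptureDR:      return tms ? Exit1DR        : ShiftDR;
        case ShiftDR:        return tms ? Exit1DR        : ShiftDR;
        case Exit1DR:        return tms ? UpdateDR       : PauseDR;
        case PauseDR:        return tms ? Exit2DR        : PauseDR;
        case Exit2DR:        return tms ? UpdateDR       : ShiftDR;
        case UpdateDR:       return tms ? SelectDRScan   : RunTestIdle;
        case SelectIRScan:   return tms ? TestLogicReset : CaptureIR;
        case CaptureIR:      return tms ? Exit1IR        : ShiftIR;
        case ShiftIR:        return tms ? Exit1IR        : ShiftIR;
        case Exit1IR:        return tms ? UpdateIR       : PauseIR;
        case PauseIR:        return tms ? Exit2IR        : PauseIR;
        case Exit2IR:        return tms ? UpdateIR       : ShiftIR;
        case UpdateIR:       return tms ? SelectDRScan   : RunTestIdle;
    }
    return TestLogicReset;  // unreachable
}
```

The sync property described above falls out of the table: from any of the 16 states, five clocks with TMS held high lead back to Test-Logic-Reset.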

4.6.4 The Boundary-Scan Description Language

The Boundary-Scan Description Language (BSDL) is a VHDL based language used to specify the functionality of a BS compatible device. If a vendor claims BS compatibility for a specific IC, the vendor must provide a boundary-scan description file. The file is written in the Boundary-Scan Description Language and has a ".bsdl" file extension. The BSDL file contains information on how tests can be run on the device. The characteristics of a BS compatible device include how long the registers are, how data is shifted and what maximal TCK clock speed is supported by the device. The file is written to be interpreted by BS test software but can also, to some extent, be interpreted by reading it in a text editor. The file is split into sections describing different test features, such as physical pin positioning, pin names and test modes, pin maps, IEEE standard compatibility, data register description, etc. When working with boundary-scan tests, it is not necessary to understand the BSDL language. This knowledge is more important to IC designers implementing the standard or to software developers creating boundary-scan test software.
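As a rough illustration of how test software consumes a BSDL file, the sketch below scans BSDL text for an integer-valued attribute such as INSTRUCTION_LENGTH. A real BSDL parser must handle full VHDL syntax, comments and line continuations; this simplified line-based scan, and the entity name in the usage example, are assumptions for illustration only.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Minimal sketch: find an integer-valued BSDL attribute of the form
//   attribute NAME of <entity> : entity is <value>;
// Returns -1 if the attribute is not found.
int attribute_value(const std::string& bsdl, const std::string& name) {
    std::istringstream in(bsdl);
    std::string line;
    while (std::getline(in, line)) {
        if (line.find("attribute " + name) == std::string::npos) continue;
        std::size_t pos = line.rfind(" is ");
        if (pos == std::string::npos) continue;
        return std::stoi(line.substr(pos + 4));  // parse the value after "is"
    }
    return -1;
}
```

For example, given a line such as `attribute INSTRUCTION_LENGTH of example_ic : entity is 8;` (a hypothetical device), the function returns 8, which a test tool would use to size the instruction register shifts.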

4.6.5 The Boundary-Scan Standards

There are several IEEE standards used when executing a boundary-scan (BS) test. The most commonly used is IEEE 1149.1, which is the first boundary-scan standard.

4.6.5.1 IEEE1149.1 (1990, 1994, 2001)

IEEE 1149.1 is a standard specifying the signals and the state machine of a service network used to test compatible devices and connected nets on the PCB. The main functionality is explained in chapter 4.6.1 Boundary-scan circuitry. By setting pins in different states and reading pin states, tests can be executed.

4.6.5.2 IEEE1149.4 (1999)

The IEEE1149.4 standard adds to IEEE1149.1 the ability to receive analog data, such as voltage levels, on specified pins, by adding analog signals to the IEEE1149.1 standard. The standard is only available in a limited number of devices. This is unfortunate, since voltage level measuring adds the ability to effectively test power components if proper DFT test pins are connected to the power circuit [Park1].

4.6.5.3 IEEE1149.6 (2003)

The IEEE1149.6 standard adds to IEEE1149.1 the ability to test AC networks and differential interconnections, by adding AC "pulse" and "train" signals to the IEEE1149.1 standard [Gope1].

4.6.5.4 IEEE1532 (2002)

IEEE1532 is an extension to IEEE1149.1, mostly known as the In System Programming (ISP) hardware standard used to program JTAG devices [Alte1]. It is a common standard, used by and compatible with many devices. In system programming (ISP) adds the ability to program and flash the BS compatible IC. If multiple firmware/flash upgradeable devices are connected to the JTAG chain, they can all be accessed and programmed through the TAP.

4.6.5.5 IEEE P1581 (2011)

The standard aims to complement IEEE1149.1 with a device standard that can communicate through an IEEE1149.1 device. It adds test functionality to logical devices by implementing ways to address the bus devices and to run self tests in a specified device test mode. No extra connectors are required [IEEE1] (except for FLASH [Gope1]); the tests are reached through a test switch address on the device. It is used to test DDR, DDR2, FLASH, etc.

4.6.5.6 IEEE1149.7 (2009)

IEEE 1149.7 is the Standard for Reduced-Pin and Enhanced-Functionality Test Access Port and Boundary-Scan Architecture. The standard is backward compatible with IEEE1149.1, except for the additional two-wire interface mode. The standard defines six classes, T0-T5, grading the supported functionality, with T5 defined as full IEEE1149.7 support. It adds features such as debug, increased scan performance, hot connection, star TAP topology, two or four pin TAP configuration, and the ability to flash and program in parallel [IEEE2]. Besides the IEEE1149.7 standard, there are several other new standards in development, but none of them is well established on the market.

4.7 Electronic load

An electronic load is a device that acts as a dummy load for a power circuit. It pretends to be the real equipment, thus testing the power circuit that provides the electric power.

4.7.1 Power circuits

A power circuit is a circuit in electronic equipment that produces more than just a signal. Some examples are motor drivers, sound amplifiers, speaker drivers and voltage converters. They can be very simple, like converting AC to DC without any intelligence, but they can also be very precisely controlled. A simple analogue power circuit could be a transformer with a rectifier, converting 230 V AC to 12 V DC. An advanced power circuit could be a motor driver directly feeding an asynchronous induction motor from the 3-phase mains.

4.7.2 Why test power circuits

Reasons to test power circuits in embedded systems are:
• Power circuits often get hot while running. Sufficient cooling is important to prevent overheating. Overheating can damage the circuit, or the circuit can shut itself down to prevent permanent damage.
• By testing the power circuit long enough for the temperature to reach steady state, the cooling is tested as well.
• Broken power circuits can cause short circuits in the device. In the worst case, the heat from a malfunctioning power circuit could cause fire and/or melt cable casing, thus making the enclosure conductive.
• Should a power circuit fail while working at peak power, in for example a sky lift, the lift would come to an abrupt stop, which could be dangerous.

4.7.3 How to test power circuits

Electric power can have many different characteristics. Testing DC is quite different from testing AC. AC involves phase shifts, reactive power and overtones. DC can be pulse modulated and contain very fast transients. This study only concerns DC testing. There are basically three ways to test power circuits, each with its pros and cons.

4.7.3.1 Using the real actuator

One way is to use the actual equipment that is supposed to be powered by the power circuit.
Benefits:
• The power circuit is tested under real load conditions.
• Fewer uncertainties about how well the load conditions are simulated, compared to other load methods.
Drawbacks:
• Not possible to test the power circuit beyond the performance of the real actuator.
• The real actuator might still be under development.
• The real actuator might be unreasonably expensive, loud, dangerous or in another way impractical to use.
• It might be difficult to measure and log characteristics from the actuator and the power circuit.

4.7.3.2 Passive dummy load

A second way to test power circuits is to connect a passive dummy load to the circuit. The real actuator is modeled by a power resistor and, if needed, capacitors and coils.
Benefits:
• Inexpensive
• Fairly easy to construct
• Easy to measure
• Simple load case
Drawbacks:
• Inflexible
• Only very simple load cases can be modeled.
• Power is just converted to heat.
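Sizing such a power resistor is a direct application of Ohm's law: the resistance follows from the rail voltage and the test current, and the power rating from the dissipated power with a derating margin. The sketch below uses illustrative numbers not taken from the thesis.

```cpp
#include <cassert>

// Result of sizing a passive dummy load for a DC output.
struct DummyLoad {
    double resistance_ohm;  // resistor value, R = U / I
    double min_rating_w;    // minimum power rating, P = U * I, derated
};

// rail_v: output voltage, load_a: desired test current, margin: derating
// factor for the resistor's power rating (a common rule of thumb is 2x).
DummyLoad size_dummy_load(double rail_v, double load_a, double margin = 2.0) {
    DummyLoad d;
    d.resistance_ohm = rail_v / load_a;
    d.min_rating_w   = rail_v * load_a * margin;
    return d;
}
```

For example, loading a 12 V DC rail with 2 A calls for a 6 ohm resistor dissipating 24 W, so with a 2x margin a resistor rated for at least 48 W should be chosen.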

4.7.3.3 Active dummy load

A third way of loading power circuits is to use active loads. They are often called e-loads or electronic loads. Electronic power components are getting cheaper and better every day, and e-loads are becoming more and more common.
Benefits:
• Can be made either simple or complex
• Wide range in both power and resolution.
• If testing consumes large amounts of electrical energy, it is possible to feed the electricity back to the power grid.
• Many ways to configure the load.
Drawbacks:
• More expensive than passive dummy loads
• More complex test equipment adds a point of failure
• Complex equipment requires education

4.7.4 Reasons to use an electronic load

The biggest advantage of an electronic load is that it is flexible. Even the simpler ones can simulate different load cases with good accuracy. They can simulate fast transients or slow changes in load. They are easy to use and small enough to fit on a desk. If testing requires a lot of current to be drawn from the mains grid, some e-loads are able to feed the electric power back to the grid instead of just converting it to heat. Almost all electronic loads have digital displays that show how much current is being drawn and at what voltage. More advanced e-loads can also send logging data to a PC or create a log file on removable media. Some advanced e-loads can be programmed to run a load sequence, which can be repeated to mimic real load cases.
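A programmable load sequence of the kind mentioned above can be modelled as a repeating list of (duration, current) steps. The following sketch is a generic illustration; the structure and names are assumptions, not any vendor's API.

```cpp
#include <cassert>
#include <vector>

// One step of a programmable load sequence: draw current_a amperes for
// duration_s seconds, then move on to the next step.
struct LoadStep {
    double duration_s;
    double current_a;
};

// Current drawn at time t for a sequence that repeats indefinitely,
// mimicking how an e-load replays a stored load profile.
double current_at(const std::vector<LoadStep>& seq, double t) {
    double period = 0.0;
    for (const LoadStep& s : seq) period += s.duration_s;
    if (period <= 0.0) return 0.0;

    double x = t - period * static_cast<long>(t / period);  // t mod period
    for (const LoadStep& s : seq) {
        if (x < s.duration_s) return s.current_a;
        x -= s.duration_s;
    }
    return seq.back().current_a;
}
```

A square-wave profile alternating between 0.5 A and 2 A each second, for instance, is just a two-step sequence; replaying it exercises the supply's transient response the way a switching actuator would.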

5 PROBLEM DEFINITION

This chapter contains a detailed description of the analysed problems regarding tests at Data Respons and the process of pinpointing these problems.
• Testing takes too much time, both during development and assembly, mainly because of high human interaction and a lack of support documents and templates regarding test procedures and documentation.
• There are doubts about the relevance, quality and repeatability of many tests.
• It is difficult to find results from old tests and detailed information about how the tests were carried out.
• It is difficult to isolate faults during product development.
The problem definition is the result of the task "Analyse what the Solutions and Operations department in Kista need regarding testing and automation" according to the bottom-up approach, see "APPENDIX 2: ADVICE FOR DECISION".

5.1 Overview

All products that are shipped to a customer must be tested. The quality requirements are very high. The final hardware tests and the testing of the product's functions are done in-house and are often time consuming. The products that need to be tested are often very different from each other. This makes it hard to re-use testing equipment and procedures, which makes development and assembly expensive. Production testing is done more or less manually, which takes a lot of time and leaves room for human errors. The developers also lack tools and routines for verification and documentation. This makes reading and understanding other developers' documentation complicated.

5.2 Defined areas

The problem definition was based on the material gathered in the pre-study. The interview material and the documents available on the SVN and QMS were analysed a second time with respect to the bottom-up approach. The results from this analysis can be found in chapter 6.5.1 "Requirements".

5.2.1 Test compatibility

The need for automation of both tests and logging of tests was identified in all interviews with the Operations personnel. One problem with the automation is that the products tested are very different from each other. In order to automate in a general manner, this issue needs to be addressed. DUTs range from normal PC performance, with GB-size RAM, GHz-speed CPUs and multiple GB of storage, down to small embedded Linux systems with ARM9 processors, 256 MB memory and 100 MB available storage. The automated solution needs to be able to run tests on both of these extremes.

5.2.2 Previous and current devices

The systems tested at Data Respons do not follow any clear patterns regarding I/O. Some systems have multiple optical megabit network interfaces, some have built-in power electronics for motor control and some only have simple serial interfaces like RS232. Tests like visual inspections cannot be fully automated. To identify the most commonly used ports, the common ports list presented in the interviews in "APPENDIX 1: Interviews, brief" was used. The most common port on an embedded system is the Ethernet network port and the second most common is the USB port. Internal devices such as the CPU and RAM have to be tested. Usually this is done by simply booting the system. The RMA (Return to Manufacturer Authorisation) analysis showed that there is a need to fully load the CPU and RAM to verify that the power electronics can provide enough current and that the cooling is correctly mounted, see "APPENDIX 6: RMA STATISTICS". Other observations regarding power electronics were that, in some cases, tests are done without fully loading the power circuits. The voltage of the power electronics is tested, but not at the specified current level. When system requirements refer to the USB 2.0 specification, this also includes the ability to provide 500 mA of current per port. There are no tools to test this and therefore it is not tested. This is not an RMA problem, but it is a potential problem that needs attention. A power supply that works well under low load might drop in voltage when suddenly loaded to the specified limit. In a mild case, a voltage drop may only cause the system to reset. In the worst case, a voltage drop can cause a current increase that can damage components. Other issues are the time needed to disassemble a product in order to plug in an internal test-OS HDD, and the time consumed by booting the customer-created OS in order to test OS functionality and applications.
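The USB port test discussed above boils down to verifying that VBUS stays within the window specified by USB 2.0 (4.75 to 5.25 V for a high-power port) while the rated 500 mA is drawn. A minimal pass/fail check might look as follows; the measured values in the usage example are placeholders for real instrument readings, not data from the thesis.

```cpp
#include <cassert>

// VBUS limits for a high-power USB 2.0 port.
constexpr double kVbusMin = 4.75;
constexpr double kVbusMax = 5.25;

bool vbus_within_spec(double measured_v) {
    return measured_v >= kVbusMin && measured_v <= kVbusMax;
}

// Pass/fail for an unloaded vs. loaded measurement pair: a supply that
// looks fine unloaded may still fail once 500 mA is drawn.
bool usb_port_ok(double v_unloaded, double v_loaded_500ma) {
    return vbus_within_spec(v_unloaded) && vbus_within_spec(v_loaded_500ma);
}
```

For example, a port measuring 5.05 V unloaded and 4.90 V at 500 mA passes, while one sagging to 4.60 V under load fails even though its unloaded reading looks healthy.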

5.2.3 Re-use of tests

There are two areas regarding the re-use of tests. The first, and major, aspect is that the Operations department does not use the same tests as the Solutions department. During development, the developers usually use their own custom scripts that are easy to run one by one but hard to automate, since the result of each test needs to be interpreted manually. The second is the re-use of tests between different products. Today the tests are done in many different ways. Some systems have their own test OS and test routine that has been specified either by the developer at Data Respons or by the customer. Some products have strict regulations on exactly how the test is to be carried out and in which order the tests are executed. There is no general method today for how to automate tests.

5.2.4 Development testing

In the beginning of development, testing and evaluation is to some extent done by trial and error. This method often lacks logging of the tests, which makes it hard to evaluate the testing method. It is also very difficult to separate hardware errors from software errors, since both hardware and drivers are new and untested. Very few automated procedures are used during development.

5.2.5 RMA analysis

An analysis was made to better understand what was wrong with the products that came back for repairs. This was done by reviewing a year's worth of RMAs (Return to Manufacturer Authorisation) for four different products, almost 100 pcs per product. There was no tool for creating statistics on what was broken in the returned products, so it was done by hand. This was also the procedure used by the Operations manager to gather RMA statistics. The conclusions drawn from examining the RMAs were that there are around 20 different errors that cause failure per product. One or two of these errors usually cause about half of the failures. The remaining errors occur from 1 to 6 times each. The common errors are often construction errors that surface after some time. The conclusion is that all external ports usually need to be tested. The cost and the bad-will of products failing inside their lifespan, or products that are DOA (Dead On Arrival), are too severe. Many of the errors that appear after some time are very hard to find using normal testing in a lab environment. These quality problems do not easily appear on rugged systems designed to be used in extreme environments. The result also implied that the power electronics should be tested better.

5.3 Delimitations

General
• Information flow between the Operations and Solutions departments regarding already existing tests will not be handled.
• Test routines that do not directly involve the test-SW or the test box created in this master thesis will not be handled.
• No deeper analysis of problems regarding administrative issues will be made. Focus is on solving known issues regarding tests.
Users
• The end users of the test box and the test-SW will primarily be the Operations and Solutions departments in Stockholm/Kista.
Environment
• Problems regarding tests involve only tests that are done or can be done in-house at Data Respons Kista, e.g. no mechanical or EMC testing will be done.
Documentation
• Existing documentation and logs from previous tests will not be arranged, improved or sorted.
• Existing documentation on how tests on specific systems should be carried out will not be arranged, improved or sorted.
Routines
• Existing routines on how test documents are created will not be changed.
• Only routines and documentation regarding the result of this master thesis will be created.
Performance

• Only performance issues regarding test automation have been mentioned and will be treated.
• The lowest system specification to test, in terms of CPU speed and memory, is the Cash Guard system (ARM9 processor at 200 MHz, 256 MB RAM and 100 MB available storage). Required tests should be able to run on this system and anything above it with respect to hardware specifications.

6 THE DESIGN PROCESS

This chapter describes the design process. It is the practical process of arriving at a concrete and detailed picture and description of how the final product should function and how it will be constructed.

6.1 Solution mapping

When starting the design process, the different aspects of and problems regarding tests were analysed, see "APPENDIX 5: SOLUTIONS MATRIX". Only solutions with a high score and a reasonable cost were evaluated. For example, techniques such as X-ray and bed-of-nails were considered too expensive to test and implement. The areas of improvement were mapped to three substantial solutions:
• Develop a multi platform test program
• Boundary-scan solution
• Power-electronics tester (E-load)
The three solutions are referred to as the test solution package.

6.2 Solution coverage

6.2.1 Covered areas

The test solution package was mapped to the test area map, see chapter 4.2 "Areas of testing". As can be seen in Illustration 18 "Solution package test area", most of the blocks are covered. The multi platform test program covers the application, OS and driver blocks, and parts of the robustness and signal quality blocks, according to the functional test method (see chapter 4.3 Functional test). The boundary-scan test method completes the logic circuitry block, and the power electronics block is completed with the electronic load.

[Figure: coverage matrix. Good coverage: applications, OS/drivers. Partial coverage: logic circuitry, power electronics, signal quality, robustness, EMC. Poor coverage: mechanics.]
Illustration 18: Solution package test area

6.2.2 Areas not targeted

Tests regarding signal quality and EMC are usually covered during the development process, and there are no obvious internal improvements to be made in this area unless an internal EMC lab is to be deployed. The developers use an oscilloscope to test the signal quality. Mechanical tests can be done in the climate and vibration test chambers at Data Respons' Oslo office in Norway. The part of the robustness tests that can be covered by the test solution package is burn-in tests on application level and power electronics loading. The partial coverage of signal quality is verified by running full speed tests of internal and external devices.

6.2.3 Total test coverage

When adding the test solution package, which is the solution proposed and partially implemented in this master thesis, to the existing tests done at Data Respons, and accepting that mechanical, robustness and signal quality tests are done only in the development phase, the total in-house test coverage of a DUT is good, as can be seen in Illustration 19 "Data Respons tests area coverage".

[Figure: coverage matrix. Good coverage: applications, OS/drivers. Partial coverage: logic circuitry, power electronics, signal quality, robustness, EMC. Poor coverage: mechanics.]
Illustration 19: Data Respons tests area coverage

6.2.4 Test program platform coverage

Functional test coverage also depends on hardware platform and operating system compatibility. If not all major hardware platforms and operating systems are covered, it is not a suitable solution. A generic test platform was repeatedly requested, see chapter 2.2 "Summary interviews". The multi platform test software has to work on the most common platforms developed at Data Respons, which can be seen in Illustration 20 "Target platform test coverage". Solving this issue is covered in chapter 6.5.6.1 "Choice of programming language".

[Figure: target platform matrix. Operating systems: Windows, Linux, BSD, custom OS. Hardware: microcontroller, ARM, Intel, custom architecture.]
Illustration 20: Target platform test coverage

6.3 Early stage user manual

In addition to the system requirements, a user manual was created at the first stage. By starting with the creation of the user manual, the users of the product can easily evaluate the idea and give important input to the design and functionality, see "APPENDIX 12: USER MANUAL".

6.4 Test System specifications

This section gives an overview of a test system to be used by Data Respons. The test system is a package of test improvement solutions that complements the test situation at the Operations and Solutions departments at Data Respons Stockholm. The solution package was presented to Data Respons with positive feedback, see "APPENDIX 10: DESIGN EVALUATION MEETING". In order to further limit the work, the automated test program was scaled down to a test execution framework. The test execution framework is a software that executes test code on a DUT. To get a clear view of the test system package, it can be represented as a dependency map linked to the DUT, as seen after the selected approach in Illustration 21 "Test system overview". The image shows the main blocks and their dependencies, with the DUT at the bottom. Green areas represent the individual studies of Magnus Dormvik (boundary-scan) and Gustav Leesik (electronic load), blue areas represent the shared work, yellow areas represent lower priority or further work, and grey areas represent existing blocks.

[Figure: dependency map with the DUT (Device Under Test) at the bottom. Blocks: automated test program for DUT (OS independent), external port logger, log file, SQL test-log database, OEM OS for the device under test, multi-platform test OS, boundary-scan test system, jumpstart server with OS and program installation, internal storage, bootable USB stick, electronic load. Legend: part of master thesis, existing parts, further work, individual studies.]
Illustration 21: Test system overview

The different parts of the test platform (see Illustration 21: Test system overview) exist in physically different parts of the company. The test program is a movable software designed to run simultaneously on multiple machines. The test OS is a PXE (Preboot eXecution Environment) bootable jumpstart OS provided as a service in the Operations network. The external port logger is intended to be a hand-held digital logging device plugged in to the DUT. The boundary-scan software is a laptop software with a JTAG bridge plugged in to the TAP (Test Access Port) of the DUT. According to advice from the supervisor at KTH, Sagar Behere, only the green and blue parts will be designed and implemented.

Automated test program

The functionality of the automated test program was reduced to a test execution framework, as explained in chapter 6.5 "Test execution framework, TEX". According to the risk assessment in "APPENDIX 7: RISK ASSESMENT", there was a great risk that the solution would be unusable if it was not dynamic enough or was too complicated to use. Therefore, the solution was reduced to a software that automates actions that can be done on a computer/system, rather than creating the actual test executables. These test executables are created/improved as new systems are developed. The test execution framework provides a way to automate the procedure of running the tests, presenting the results and logging the test results.

External port logger

The external port logger is a hardware tool for measuring voltage levels on custom I/O ports. This is done with a changeable port connector that allows the testing of different connectors, such as DB9 and RJ45, running different protocols, such as RS232 and Ethernet. An additional device is used to load the port with a specified current. Voltage levels are measured and communicated to the automated test program's logging system. The boundary-scan test system is a JTAG TAP (Test Access Port) to USB bridge with software for generating automated boundary-scans from the PCB net-list.

6.5 Test execution framework, TEX

TEX (Test execution framework) is a multi-platform application with low system requirements, capable of automatically executing command-line programs; in this case, small tests that verify the functionality of specific hardware (CLI, Command Line Interface, tests). TEX runs on multiple operating systems, parses the answer from the command line or from a file, interprets the answer and saves the result to a log file. The strength of the solution is that it can be used in virtually every system that Data Respons is testing and will test in the near future. The solution also makes it possible to re-use the small and highly manual CLI tests that the developers use to verify the DUT in the early stages of board bring-up. This is done by running these CLI tests in TEX and comparing the result to the expected output of the CLI test. To design this test execution framework according to the V-model, the first task is to create requirements.

6.5.1 Requirements

The requirements were based on "APPENDIX 5: SOLUTIONS MATRIX". The test system requirements were approved by the Operations manager, see "APPENDIX 14: TEST SYSTEM SPECIFICATION". "Shall" means that it is an absolute demand. "Should" means that it is a desired feature but not mandatory.

6.5.1.1 Functional requirements of TEX

1. Shall provide an automated way to run tests on commonly used ports and devices.
2. Shall generate relevant logs and save them to an appropriate location for archiving.
3. Shall be able to generate test reports tailored to the devices being tested.
4. Shall be able to run in terminal mode.
5. Shall be able to load system configurations for different systems and handle versions of configurations.
6. Shall be possible to upgrade the test program to run on new hardware and OS.
7. Shall run on Windows Embedded / XP / 7 and Linux.
8. Shall run on ARM9, Intel and AMD hardware platforms.
9. Shall be able to execute a command after other tasks have finished.
10. Shall be able to execute a task after a given moment in time.
11. Shall provide a way of logging non-automated test procedures.
12. Shall provide an automated or manual way of entering a DUT's serial number.
13. Shall provide an automated or manual way of entering date and time.
14. Shall provide a manual way of entering the test person.
15. Should be able to list common devices and run tests according to test requirements.
16. Should be able to push log files to an SQL log-file database.
17. Should be able to load pre-configured custom tests from a repository.

6.5.1.2 Non-functional requirements of TEX

1. Shall run on low disk-storage test platforms (minimum 10 MB).
2. Shall be able to run on systems with minimal system RAM (minimum 32 MB).
3. Shall be easy to maintain.
4. Shall make it easy to create automated testing of new systems.
5. Shall have a settings menu with required options.
6. Shall be able to configure settings regarding log-file location and logging level.
7. Shall make it easy to re-use and modify old tests.

6.5.2 Limitations in TEX

1. Shall not include any built-in tests, only a framework to execute predefined tests from the CLI.
2. Shall not include a GUI.

6.5.3 Documentation requirements

1. Shall include a user manual.
2. Shall include a guide for test-script generation.
3. Shall include a code-block explanation.

6.5.4 TEX functionality

The test execution framework is an interactive command line program that can be run from a terminal such as cmd.exe (Windows) or Bash (Linux, UNIX). The framework contains functions to run external tests (device tests), create device test lists (system tests) and run the device tests according to the device test list in an automated manner. The results of the device tests are presented in short on the command line and saved in detail to a log file. A device test list can be a list of tests that should be run to test a particular DUT according to the test specification of that system. If a device test result is PASS, the next device test is executed. If a device test fails, the operator is presented with options on how to proceed, see Illustration 22 "Test execution framework - flowchart run mode".

6.5.4.1 Running tests
The framework's main mode is the “run” mode, used to execute a specific test list. It reads the device test lists and executes the commands that are specified in the device test packages. These tests can be programs already installed on the DUT operating system or provided with the test package. The result is compared with the expected output provided in the device test package. When a system test is executed by selecting the run option in the main menu, the framework reads the configuration file (last.conf) which contains the test user name and the test system name. The test user name is the name of the responsible test operator and the system name is the file name of the system test bundle to be executed. The framework then opens the file dut.conf in the folder named after the system being tested. The file dut.conf is the system test configuration file. It contains the device test list and information about the DUT, such as hardware configuration and revision, that is used as a header in the log file together with date, time and test user information from the configuration file last.conf. The flowchart in Illustration 22 “Test execution framework-flowchart run mode“ was created to decide the work-flow of the software. The flowchart went through many iterations before taking its present form.

Illustration 22: Test execution framework-flowchart run mode

6.5.4.2 Creating device tests
Device tests are independent test packages that can be run as a single test or as part of a device test bundle with several device tests, which is defined as a system test. A device test is created by manually copying a device test template folder. The folder includes the configuration file test.conf and a manual.pdf that explains the procedure and the options available to create a device test. Depending on the specific needs of the test, several additional files can be included, for example executables needed to complete the test, see Illustration 23 “Device test folder“.

Illustration 23: Device test folder

The test is executed and interpreted according to the configuration file test.conf. It contains brief information about the device and the following fields:
CMD: the command to be executed.
CMDOUTPUT: where the result of the execution can be accessed.
CONDITION: how to compare the result.
REFERENCE: what to compare the result to.
See Illustration 24 “Test.conf file“.
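Reading such a KEY:value file can be sketched as below. The real TEX parser is not published in the report, so the function name is illustrative; only the four field names above come from the text.

```cpp
#include <map>
#include <sstream>
#include <string>

// Minimal sketch of reading the KEY:value fields of a test.conf file
// (CMD, CMDOUTPUT, CONDITION, REFERENCE as described above).
std::map<std::string, std::string> parseTestConf(const std::string& text) {
    std::map<std::string, std::string> fields;
    std::istringstream in(text);
    std::string line;
    while (std::getline(in, line)) {
        std::string::size_type colon = line.find(':');
        if (colon == std::string::npos) continue;  // ignore lines without a key
        fields[line.substr(0, colon)] = line.substr(colon + 1);
    }
    return fields;
}
```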

Illustration 24: Test.conf file

By using the custom input-output-parse approach, any command-line test can easily be automated by simply pasting the result of the execution into the reference file, or by typing the result directly in the configuration file if the result is short. As an example, the output of the command “ping” in the Microsoft cmd.exe environment will contain the string “reply” if the host exists. Automating this test is done by configuring the test.conf file as:

CMD:ping 192.168.0.1
CMDOUTPUT:stdout
CONDITION:contain
REFERENCE:reply

One of the most common ways to test external devices is to send data to a device equipped with loopback plugs and compare the sent data to the received data. In addition to testing, the structure of test.conf allows installation steps to be automated during tests. Some systems need to be manually programmed with unique MAC addresses; this can be automated while at the same time confirming that the programming of the MAC address went well. Regular expressions will be used to parse data from the tests. A regular expression (regexp) is a common method to search for and compare strings or text.
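The comparison step could look like the sketch below. The report confirms a contain-style comparison and the use of regular expressions; the condition names "contain", "equal" and "regex" and the function name are assumptions for illustration.

```cpp
#include <regex>
#include <string>

// Sketch of result interpretation: compare the captured command output
// against REFERENCE according to CONDITION.
bool conditionHolds(const std::string& condition,
                    const std::string& output,
                    const std::string& reference) {
    if (condition == "contain")
        return output.find(reference) != std::string::npos;
    if (condition == "equal")
        return output == reference;
    if (condition == "regex")
        return std::regex_search(output, std::regex(reference));
    return false;  // unknown condition: fail safe
}
```

With the ping example above, `conditionHolds("contain", pingOutput, "reply")` would decide PASS or FAIL.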

6.5.4.3 Creating system tests
The creation of system tests is done by selecting “Create new system test” in the main menu. An interactive procedure guides the user through naming the system test and adding type, description and revision number. A folder structure is then created with the necessary folders and configuration files, including the file dut.conf, see Illustration 26 “USB folder view“. The user is then prompted to add device test packages to the “run” folder. The folder is scanned and the tests are added to the file dut.conf. Device tests can also be added manually by editing the file dut.conf and adding device test packages to the “run” folder, see Illustration 25 “Test execution framework-flowchart, Create mode“. This approach was believed to be the most flexible. The idea is to enable the import of existing tests. Most of the tests that are required to perform a full test of a system can be imported from a test repository where device tests from other system tests are stored, see Illustration 27 “SVN Folder view“. The system-specific device tests can then be created manually, using the imported tests as templates and consulting the manual.
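The folder scan described above can be sketched with std::filesystem (C++17). This is only an illustration of the idea, not the TEX implementation: every sub-folder of the “run” folder is treated as one device test package and the folder names become the device test list; writing dut.conf itself is omitted.

```cpp
#include <algorithm>
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Collect the names of all device test package folders inside runDir,
// in sorted order, ready to be written into dut.conf.
std::vector<std::string> scanRunFolder(const fs::path& runDir) {
    std::vector<std::string> deviceTests;
    for (const auto& entry : fs::directory_iterator(runDir))
        if (entry.is_directory())
            deviceTests.push_back(entry.path().filename().string());
    std::sort(deviceTests.begin(), deviceTests.end());
    return deviceTests;
}
```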

Illustration 25: Test execution framework-flowchart, Create mode

6.5.5 File structures, USB and SVN
The file structure of the test execution framework exists in two different places: the USB file structure and the SVN file structure. The USB file structure can be located on any removable media with write access.

6.5.5.1 USB File structure
The USB file structure is where the system test is executed from during tests. The USB file structure can be seen in Illustration 26 “USB folder view“. The root of the media contains the following files and folders:
• The test execution framework executable tex.exe
• The file last.conf, which contains the name of the operator and which system test was run last
• A folder, e.g. Product_name_revXX_verYY, containing all the device tests included in the system test

Illustration 26: USB folder view

The folder setup is designed to be easy to understand and modify. The root of the folder tree contains the test execution framework (TEX) executable file tex.exe and the configuration file last.conf. Within the root folder there are system test folders. It is possible to have more than one system test folder in the root. The selection of which system test to execute is done in the main menu in TEX. Below the system test folder are the device test folders (device test packages) containing the files necessary to run each device test, as described earlier in chapter 6.5.4.2 “Creating device tests“. The USB folder setup allows the user to manually add and remove system tests and device tests within the system test. New system test folders are directly selectable from the framework. New device tests can also be selected from the framework without editing any configuration file. If the user prefers to edit the test configuration files manually, that is quite easy since they are written using a simple and readable syntax. TEX also informs the user that the information in the configuration file dut.conf can be edited after import of device tests in order to customise and change the order of the device tests. This dual approach was chosen to fit the two different ways of working at the Solutions department and the Operations department.

6.5.5.2 SVN file structure
The SVN (Subversion) file system area has a folder near the root which is the repository for different device tests. The test repository is indexed by OS at the 1st level and by device at the 2nd level. This makes it easy to find old tests that can be used for a new system. A simple approach could be to drag-and-drop all the device tests from the repository to the “run” folder, try them all in TEX and remove the device tests that do not work. According to 6.5.1.2 “Non Functional requirements of TEX“, the ability to re-use tests is a key feature. To achieve this, a number of aspects needed to be taken into consideration: where to store tests, how to name them and how to make sure they are up to date. The analysis of these aspects was then transformed into a work-flow, which is explained in chapter 6.5.5.3 “Test execution framework work-flow“ and illustrated in Illustration 28 “Framework workflow“. The work-flow requires that a device test repository exists and that product revision handling is easily available to, and maintained by, the developers. Data Respons uses SVN to store all system development documentation. Tests developed for a specific product are stored in the test sub-folder within each product folder, see Illustration 27 “SVN Folder view“.

Illustration 27: SVN Folder view

6.5.5.3 Test execution framework work-flow
Solutions engineers are responsible for the creation of device tests and system tests. The developers use their existing document platform Subversion (SVN) to conveniently store the tests in appropriate parts of the SVN, see chapter 6.5.5.2 “SVN file structure“. They can access previous device tests either by getting them from a folder belonging to a previous version of the hardware or similar hardware in SVN, or by using the repository to access the device tests that are available for the particular OS. When tests have been imported and the tests that have to be developed specifically for the system have been created, the new device tests are uploaded to the SVN repository. The system test package is approved according to the system requirements and the requirements for production testing at the Operations department. A release of the system test is uploaded to Arena (BOM and change management software) [ARENA] and approved. Operations personnel can now easily download the complete test package from Arena to a mobile media and run the test. This approach also forces the involved developers to confirm changes in a system test. A change to a file belonging to a product in Arena needs to be confirmed by the owner. Thus, the developer gets feedback on what needs to be changed in a future version.

Illustration 28: Framework workflow

6.5.6 Code generation
Flowcharts describing how functions and classes in the code should interact with each other were created from the flowcharts describing the user interface and the input-output of the program, see Illustration 22: Test execution framework-flowchart run mode and Illustration 25: Test execution framework-flowchart, Create mode.

6.5.6.1 Choice of programming language
The programming language selected was C++, extended with the Qt library. During the initial design process, several solutions to automate tests were evaluated, see “APPENDIX 5: SOLUTIONS MATRIX“. The following programming environments and languages were considered:
• National Instruments (NI) TestStand. It was introduced by a technical sales support person from NI. This solution requires a runtime environment package to be installed on the system running the code. This package would require about 1 GB of storage space and is therefore not possible to install on a lightweight embedded system with only 100 MB of storage (Cash Guard).
• Java and Python. These also require a runtime package to run. Capable of cross compiling to Windows and Linux. Well proven languages.
• C++ with the Qt library. Does not require a runtime environment. Possible to cross compile to Windows and Linux. Produces executable files of small size. Well proven language. [NOKIA1]
The test execution framework (TEX) had to be created from scratch, with the requirement that it should be able to run on many platforms and operating systems. A small binary package that would fit the smaller storage of some embedded configurations was necessary. There were also requirements regarding time to test and the requirement to use the original state of the DUT OS when testing. These requirements made it hard and time consuming to install the runtime environments needed for NI, Java or Python. There are ways to make single-file executable binaries in Python, Java and C++ alike. The supervisor of this master thesis, Sagar Behere, also recommended C++ with Qt; the possibility to get support from him added to the benefits of C++ with Qt. The 32-bit Qt development platform [NOKIA2] was used. In order to simplify the file structure and eliminate any dependency issues on the target system, a static build library was created. Using the static build library when compiling the application results in a single executable binary file that does not require any additional files to run.


6.6 External port logger

6.6.1 Description
The design process of the system followed some milestones:
• Writing a manual
• Flow chart of how the software works
• Flow chart of how the hardware works
• Choice of coding language
The external port logger is supposed to be a dongle small enough to be hand held. It is connected to the USB port of the DUT (or the device running TEX). It should have several voltage measuring ports. These ports will measure the voltage levels of communication ports and the current level of the electric supply. The voltage and current levels will be logged together with the other tests. It will be controlled by TEX just like the device under test. The requirements were specified based on “APPENDIX 5: SOLUTIONS MATRIX“ and the Test System Specification. The test system specification was approved by the Operations manager. See APPENDIX 14: TEST SYSTEM SPECIFICATION.

6.6.2 Functional Requirements
“Shall” means that it is an absolute demand. “Should” means that it is a desired feature but not mandatory.
Shall connect and communicate with the DUT via USB.
Shall be able to send data to a test program on the DUT.
Shall be able to measure voltage levels of common ports.
Shall be able to draw nominal current from common ports to simulate a long cable and load.
Shall be possible to add new measurement ports.
Shall be designed for a fairly large number of test ports.
Shall be able to choose which ports are active.

Should be able to measure the frequency of, for example, clock signals.
Should be designed for hard physical treatment.
Should be able to measure and log the current drawn by the DUT in real time.
Should be able to measure and log temperature with a probe in real time.
Should be configurable to create custom tests with respect to voltage level, current level, time and frequency.
Examples of measurements:

• Test serial RS232 voltage levels
• Test serial RS485 voltage levels
• Test Ethernet voltage levels
• USB speed test and 500 mA load on all ports
• VGA voltage levels
• Power consumption of the system

6.6.3 Non Functional requirements
Shall be possible to calibrate.
Shall be mobile.
Shall be powered via USB.
Shall be easy to use.
Shall be possible to reproduce.

Should be quick to connect.
Should be designed for electric overload.

6.7 Boundary-scan
The Boundary-scan requirements were specified to be part of the solution to the problem “It is difficult to isolate the faults during product development”, see chapter 5 PROBLEM DEFINITION, and were refined in the solutions matrix, see “APPENDIX 5: SOLUTIONS MATRIX“. The requirements were then to solve the known problems that the Boundary-scan test method was able to solve according to the pre-study of the method.

6.8 Electronic Load

6.8.1 How an electronic load works This section will describe how an E-load works in general.

6.8.1.1 Galvanic isolation
When a step load is applied to a power circuit, the logical level of the ground signal might fluctuate. This can cause bad measurements and might damage components. Therefore it is important to electrically separate the low voltage logic and the power electronics. This is called galvanic isolation. To still be able to transfer information and power between the two systems, opto couplers and transformers can be used. Some current and voltage probes that are available as integrated circuits are galvanically isolated and constructed to give a linear response. See Illustration 29: Galvanic isolation in the measure- and control loop.

Illustration 29: Galvanic isolation in the measure- and control loop

6.8.1.2 Using a MOSFET to load a power circuit
A MOSFET (metal oxide semiconductor field effect transistor) can be operated in two regions. In the non-linear region (0 < VDS < 3 V and 0 < ID < 20 A in Illustration 30) the current is hard to control. For most applications VDS will be greater than 3 V, and the current will therefore depend only on VGS.

Illustration 30: Example of current flow through a MOSFET as a function of gate-source voltage and drain-source voltage.

6.8.1.3 Controlling a MOSFET with a PWM signal
A micro controller can be used to emit a PWM (Pulse Width Modulation) signal. This signal is connected to the gate of the MOSFET. If the PWM is smoothed using capacitors, the mean value will be between 0 and 8 V and the current drawn between 0 and 20 A.

6.8.1.4 Setting the PWM in a micro controller
A PWM in a micro controller is often controlled by setting a timer on free run. The timer then counts from zero to its maximum value, here called ovf, and starts over again. This maximum value divided by the speed of the timer becomes the period of the PWM. The PWM signal is set high when the counter reaches zero. To adjust the duty cycle of the PWM, a register, here called top, is set to a value between zero and ovf. When the timer reaches top the signal is set low. The duty cycle is the percentage of the period during which the signal is high.
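The timer arithmetic above (duty = top / ovf) can be sketched as two small helpers. Register names are generic here, not tied to a specific micro controller; the function names are illustrative.

```cpp
#include <cstdint>

// The PWM period is ovf timer ticks and the output is high while the
// counter is below top, so duty = top / ovf.
std::uint32_t topForDuty(double duty, std::uint32_t ovf) {
    if (duty < 0.0) duty = 0.0;  // clamp to a valid duty cycle
    if (duty > 1.0) duty = 1.0;
    return static_cast<std::uint32_t>(duty * ovf + 0.5);  // round to nearest tick
}

double dutyForTop(std::uint32_t top, std::uint32_t ovf) {
    return static_cast<double>(top) / static_cast<double>(ovf);
}
```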

6.8.1.5 Constant resistance
If it is desired to emulate a resistive load, the E-load can be set to constant resistance. See Illustration 31 and Equation 1.

Illustration 31: Graph showing constant resistance (axes: voltage u versus current i)

top = ((V_DS / R) + D) * ovf / (V_pwm * C)

Equation 1: Setting of PWM signal for constant resistance

6.8.1.6 Constant current
If it is desired to emulate a constant current, the E-load can be set to constant current. See Illustration 32 and Equation 2.

Illustration 32: Graph showing constant current (axes: voltage u versus current i)

top = (I_D + D) * ovf / (V_pwm * C)

Equation 2: Setting of PWM signal for constant current

6.8.1.7 Constant power
If it is desired to emulate a constant power load, the E-load can be set to constant power. See Illustration 33 and Equation 3.

Illustration 33: Graph showing constant power (axes: voltage u versus current i)

top = ((P / V_DS) + D) * ofv / (V_pwm * C)

Equation 3: Setting of PWM signal for constant power
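Assuming the reading top = (I_D + D) * ovf / (V_pwm * C) of the equations in this section, with I_D = V_DS / R for constant resistance and I_D = P / V_DS for constant power, the three modes can be sketched as below. The struct and all function names are illustrative; D and C are the calibration constants of the gate-drive chain and their values are hardware-specific.

```cpp
// Each load mode maps the desired behaviour to a target drain current
// I_D, which is then scaled to the PWM compare value top.
struct ELoadCal {
    double D;     // current offset term from the equations
    double C;     // gain from smoothed gate voltage to drain current
    double Vpwm;  // PWM high level in volts
    double ovf;   // timer overflow (period) value
};

double topFromCurrent(double iD, const ELoadCal& k) {                // constant current
    return (iD + k.D) * k.ovf / (k.Vpwm * k.C);                      // Equation 2
}
double topFromResistance(double vDS, double r, const ELoadCal& k) {  // constant resistance
    return topFromCurrent(vDS / r, k);                               // I_D = V_DS / R
}
double topFromPower(double p, double vDS, const ELoadCal& k) {       // constant power
    return topFromCurrent(p / vDS, k);                               // I_D = P / V_DS
}
```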

7 TEX IMPLEMENTATION
This chapter describes how the Test Execution Framework (TEX) works: how TEX behaves in practice and what features are implemented.
7.1 Test execution framework (TEX) implementation

7.1.1 Laboratory test of TEX
The beta release of TEX was demonstrated to developers and engineers at Data Respons. It was run on an embedded system (ABB PS700) running Windows XP. TEX performed as specified.
• TEX.exe and its .dll files were put on a USB flash drive
• TEX was run on a standard PC (Illustration 34: TEX main menu)

Illustration 34: TEX main menu

• A new product test was created (Illustration 35: TEX, "Create new product test"- function)

Illustration 35: TEX, "Create new product test"-function

• Folders containing device tests were added to the test folder (Illustration 36: TEX, folder scan)

Illustration 36: TEX, folder scan

• The newly created test was selected as the active test (Illustration 37: TEX, list of available product tests (only one available))

Illustration 37: TEX, list of available product tests (only one available)
• The DUT (ABB PS700) was connected to a display, mouse and keyboard (Illustration 38: The DUT, PS700 made for ABB)

Illustration 38: The DUT, PS700 made for ABB

• The DUT was powered on, and the USB flash drive with TEX and the test folders was mounted
• TEX was started on the DUT (it looks the same as on the PC) (Illustration 34: TEX main menu)
• The product test for PS700 was selected and run (Illustration 39: TEX, running a product test on PS700)

Illustration 39: TEX, running a product test on PS700

• One device test (disc mount, incl. USB) was planned to fail (Illustration 40: TEX, failed device test)

Illustration 40: TEX, failed device test

• The test was continued with the error logged. The last test passed and the product test ended. When the test is over, the main menu is displayed again. (Illustration 41: TEX, fail menu and end of product test)

Illustration 41: TEX, fail menu and end of product test

• The test is documented in two log files, one brief (Illustration 42: The brief log: main.log) and one detailed (Illustration 43: The detailed log). The brief log file is appended with each executed product test. The detailed log file is individual for each executed product test and is supposed to be a tool for troubleshooting a malfunctioning product.

Illustration 42: The brief log: main.log

Illustration 43: The detailed log

8 BOUNDARY-SCAN PRODUCT EVALUATION

This chapter describes how boundary-scan works in practice compared to the functionality provided in theory by the different standards. The content of this chapter is built on the product evaluation reports in Appendix 20 and Appendix 21. A boundary-scan product evaluation was done to analyze which functionalities are available in the products, compared to what is theoretically possible according to the different standards available. The target board selected for testing was the ABB PS700 FPGA board. This board was selected because it had a boundary-scan chain connected to nine compatible devices.

8.1 General functional overview
There are many boundary-scan products available. The products are normally expensive, with price tags starting at $20000 for an automated solution. For manual testing there are some low cost alternatives. One automated solution and one manual solution were tested and evaluated.

Product  | View/set states    | Test coverage statistics | NET import
XJTAG    | Full functionality | Full functionality       | X
TopJTAG  | Full functionality | Limited                  | Only .nl nets
Xilinx   | None               | None                     | None

8.1.1 XJTAG evaluation
XJTAG provides a complete setup for testing PCBs. It includes tools to run manual tests (XJAnalyser), create custom automated tests (XJDeveloper) and create test packages for specific systems (XJRunner). To help the user get started quickly it also includes well made documentation and tutorials. See APPENDIX 8: XJTAG PRODUCT EVALUATION.

8.1.2 TopJTAG evaluation
The TopJTAG Probe is boundary-scan software that implements the functionality of the IEEE 1149.1 standard. The product consists of a single software binary that can be used to do manual boundary-scan tests. This, in short, means setting and reading states on pins. The boundary-scan compatible devices are presented in a graphical view with all pins colour-coded according to pin state. TopJTAG Probe also enables the user to import the pin names of the JTAG compatible IC, which can be seen as a light version of BOM list import since the names and values of the pins are often specified in these files. TopJTAG is a simple and intuitive application that provides a manual method of testing the connections on the PCB. By importing BSDL files the program presents the boundary-scan devices in a graphical way, similar to the physical design. The program uses IEEE 1149.1 but does not provide any way of running ISP programming, which is partly supported in the standard. This feature is provided in “TopJTAG Flash“, which is distributed separately. The program does what is expected in a simple and understandable way. The test is covered in the TopJTAG test report, see APPENDIX 9: TOPJTAG PRODUCT EVALUATION.

8.1.3 Xilinx SPI
The Xilinx SPI programmer was the programmer used by the developers to program the FPGA on the PCB. The TAP connector on the board was designed for this programmer, and it was therefore natural to investigate the possibilities of this programmer with the provided software. In order to do this the Xilinx Platform Studio was installed. After searching the provided manuals and guides and researching the internet, it was found that it did not provide any of the boundary-scan functionalities except the ISP standard IEEE 1532, which enables the programming of devices over the boundary-scan chain.

9 RESULT
This chapter contains the results of how the achievements meet the goals from the Advice for decisions in chapter 3.
9.1 Approach goals
The result of the solution package was compared with the goals defined in “Advice for decisions”, see 3.2.2 Approach. The ability to quickly make physical layer tests is covered by implementing one of the boundary-scan solutions and the use of an electronic load. This way, both logical and quantitative tests can be made of the physical layer. The ability to speed up repetitive tests of multiple units is achieved by implementing the test execution framework. The tests are custom made to achieve higher automation. The test developer is not locked into a test product with limited functionality. The ability to minimise errors caused by monotone tasks for the tester is also achieved by implementing the test execution framework. Monotone tests are automated to a higher degree. The ability to make tests traceable and repeatable is achieved by using a content management system and checking in versions of the test execution framework according to the TEX manual. Automated documentation of tests is managed by the test execution framework.
9.2 Solution package evaluation

9.2.1 Test execution framework evaluation
According to the V-model, the system requirements were checked against the functionality of the software (TEX). Some requirements were not verified and some were not met. The requirements that were not met are numbers 10, 15 and 16 (see 6.5.1 Requirements). TEX itself cannot execute a test at a given moment in time, but it can execute a device test that is custom built to execute at a given moment. The ability to list common devices and to log to the SQL database was not implemented because of time and priority. The untested requirements are numbers 7 and 8. TEX was not compiled and tested on Windows XP Embedded, but it is highly probable that this will work since it runs flawlessly on Windows XP and Windows 7. TEX was not compiled for the ARM platform. The non-functional requirement regarding a settings menu was not met literally; instead the required options are set in the configuration files for the test. All other requirements were verified and met, and some improvements were made to the automated test result interpretation thanks to the implementation of regular expressions instead of statically created compare scripts.

9.2.2 Boundary-scan evaluation
The boundary-scan method does provide the tools necessary to distinguish whether an error is caused by software or hardware. The testing technique also provides convenient methods for production testing by implementing automated boundary-scan tests. When most of the automation was done in the XJTAG test software, the total test coverage was 73.5%. This can be considered fairly high test coverage since the PS700 board was not designed for boundary-scan testing. The test is covered in the XJTAG test report, see

APPENDIX 8: XJTAG PRODUCT EVALUATION. If the cost of an automated solution is too high, a manual solution can provide much of the same functionality but requires more time for testing. The TopJTAG software was evaluated as a simple and intuitive application that provides a manual method of testing the connections on the PCB. The basic boundary-scan ability to manipulate pins on ICs that would normally not be accessible, such as BGAs and LGAs, is supported in this test suite. Working with error identification on a faulty PCB becomes a lot faster since the logical testing can be done entirely in a GUI, by selecting pins on the boundary-scan devices and setting a desired state; no oscilloscope or probe positioning is required. The test is covered in the TopJTAG test report, see APPENDIX 9: TOPJTAG PRODUCT EVALUATION.

10 FURTHER WORK
This chapter lines up the gap between the goals from chapter 3 and the results in chapter 9.
10.1 Test execution framework implementation
The test program needs to be implemented in the organization to get a proper evaluation of the solution. The program, TEX, also needs to be completed with debugging and added functionality such as SQL logging. The ability to list common devices would be a useful feature to implement as further work. To make sure TEX is working as intended, several more test scripts need to be created and run on different devices. These products then need to be tracked during their lifetime to see if the rate of malfunctioning products goes down. The tests also need to be re-run after some time to make sure the concept of re-running old tests works as intended.
10.2 Boundary-scan product implementation
The test method needs to be implemented in the organization to get a proper evaluation of the solution. In order to do this, the developers at Data Respons need to get information about the advantages. It would also be recommended to further test other boundary-scan test software.
10.3 Electronic load implementation
The test method needs to be implemented in the organization to get a proper evaluation of the solution. The importance of properly loading power circuits needs to be known to developers and test personnel.
10.4 Routines
A company-wide test strategy needs to be implemented. The strategy needs to range from system design, through development to production and back again. Especially the feedback from production and repairs to the developers is not good enough. These kinds of improvements were part of the Top-Down approach, which was not pursued in this master thesis.

11 REFERENCES

References
[Park1]: Kenneth P. Parker, "In-Circuit Testing", pp. 5-7, The Boundary-Scan Handbook, 1998, ISBN 0-7923-8277-3
[Fein1]: FEINMETALL GmbH, "Principle of ICT test fixture", 2011, http://www.feinmetall.com/fileadmin/feinmetall/frameset_eng/frameset.htm
[GOEP1]: Heiko Ehrenberg, "In Circuit test", 2007, www.goepel.com
[Spea1]: SPEA S.p.A., "Flying Probe Testers", 2011, http://www.spea.com/ATEforElectronicsInd/FlyingProbeTesters/tabid/108/language/en-US/Default.aspx
[Doi1]: Hideaki Doi, Yoko Suzuki, Yasuhiko Hara, Tadashi Iida, Yasuhiro Fujishita, Koichi Karasaki, "Real-time X-ray Inspection of 3-D Defects in Circuit Board Patterns", 1995
[QUAN1]: Ji-Quan Ma, Fan-Hui Kong, Pei-Jun Ma, Xiao-Hong Su, "Detection of Defects at BGA Solder Joints by Using X-ray Imaging", 2005
[Said1]: Asaad F. Said, Bonnie L. Bennett, Francis Toth, Lina J. Karam, Jeff Pettinato, "Non-Wet Solder Joint Detection in Processor Sockets and BGA Assemblies", 2010
[Park2]: Kenneth P. Parker, "In-Circuit Testing", pp. 8-32, The Boundary-Scan Handbook, 1998, ISBN 0-7923-8277-3
[XJTA1]: XJTAG, "Introduction", 2010, www.xjtag.com/docs/XJTAG_DFT_Guidelines.pdf
[Gope1]: Heiko Ehrenberg, "In Circuit test", 2007, www.goepel.com
[Alte1]: Altera Corporation, "IEEE 1532 Hardware Programming Standard", 2011, http://www.altera.com/products/devices/max7k/features/m7k-ieee1532.html
[IEEE1]: IEEE, "1581-2011", 2011, http://ieeexplore.ieee.org/Xplore/login.jsp?reason=login&url=stamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D5930310
[IEEE2]: IEEE, "1149.7-2009", 2009, http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5412866
[ARENA]: Arena Solutions, www.arenasolutions.com
[NOKIA1]: Nokia, Qt in use: desktop, http://qt.nokia.com/qt-in-use/target/desktop
[NOKIA2]: Nokia, Qt developer tools, 2011, http://qt.nokia.com/products/developer-tools

APPENDIX 1: Interviews, brief

11.1 Summary of interviews with developers and assemblers at Data Respons
The interviews were conducted by Gustav Leesik and Magnus Dormvik during June 2011.
11.2 Kista

11.2.1 Summary
The Kista office houses the Sales, Admin, Development and Operations departments. Kista mostly builds module-based systems. These resemble an ordinary computer in structure but are designed specifically for their purpose. They usually run Linux or Windows Embedded as the operating system. Kista is the only office that has its own final assembly. Mikko Salomäki is Field Application Engineer. Tommy Olsson is Operations Manager.

11.2.2 General
Which roles and individuals exist in the projects?
It depends on the size.
• Small projects, such as selling something off-the-shelf, rarely get a project manager; the salesperson handles most of it.
• Medium-sized projects: here the developer sometimes becomes project manager.
• Large projects are more strictly managed, with people in all roles, e.g.:
Solution Manager
Project Manager
Deputy Project Manager
Hardware developer
Mechanics
PCB layout
Software developer
Purchasing
System Architect and Documentation
Test, verification & validation

One person can hold several posts, but all areas of responsibility are covered.
• Sometimes the same person who develops also performs the acceptance test.

What does a typical development project look like for a developer?
• Pre-study (sometimes included in the quotation)
• Requirements specification
• System work
• Drawing block diagrams
• Functional description
• Reasoning about component choices, etc.

What does a typical project look like from an Operations perspective?
• In some projects we are brought in somewhere in the middle to solve problems.
• One problem is that too little time and money is budgeted for test and documentation, so they suffer. The amount of testing needed, and hence what the customer should pay for it, is not calculated properly. It is then hard to come back afterwards and raise the test costs. It has improved lately, though.
• Operations usually enters the projects when prototypes and the pre-production series are to be built. The very first prototypes are often built by the developers themselves, though they order material through Operations. These first prototypes do not have traceable components. When the pre-production series is built, all components are fully tracked.

How do handovers to the customer work?
Operations is product owner of everything assembled at Operations. There is a checklist for what is to be handed over to the customer when a product is delivered.

Which roles and individuals exist in the projects?
The general perception is that templates and routines are lacking, both for how a project should be run and, consequently, for test documentation. It has been problematic to get into other developers' work, since project documentation has also been deficient.

When does the test phase enter the projects?
One view is that projects often start quite seriously, attempting the perfect project "by the book". Good requirement specifications are then set and time for testing is planned. The problem is that the time allotted for testing often loses out to other activities.
The test phase must therefore be given higher priority.

How are the acceptance criteria defined today?
Tommy wants a signed acceptance specification from the customer in hand. It defines the tests and all requirements placed on the product. The requirements specification is used at the start of a project, the acceptance specification at the end.

Where do these criteria come from?
For the PS700, for example, the project manager Peter had an interview/meeting with Tommy specifically to discuss functional testing. If the requirements specification states what is to be tested, the developers have already designed certain tests. Otherwise Operations (i.e. Mikko) has to design tests that cover the test needs. There are no general test templates.

What requirements does the customer usually have?
• That the equipment works according to spec.
• Passive cooling, low power
• CE marking
• SIL or other standards of ISO type. This means having to learn what requirements apply in these standards.
• EMC requirements
• Climate requirements
• Vibration tolerance

How are the tests fed back against the requirements placed on the product?
• The requirements in the system specification are reformulated into requirements, often together with the customer. Sometimes the customer needs help with this phase.
• Non-functional requirements are created by the developer, often in connection with board bring-up, when the product is tested for the first time.
• Checking/certification of e.g. ISO requirements is done by an external test house.
• The customer often says that they test themselves.
• Whether everything in the acceptance specification is to be tested or not depends somewhat on whether the product needs software that DR does not have, or must be tested in a special environment or against other products that DR does not have access to. In that case this should be described somewhere.
• Things are rarely measured in Operations; functional testing is done instead.
• The PS700, for example, has tougher requirements in the acceptance test than in the requirements specification, in order to be certain of meeting the requirements.
• As long as it is described, test results are checked against the requirements specification. However, our routines do not describe how this feedback is supposed to take place.

What templates exist for doing tests? Where are they?
There is a template for writing test instructions, but it is only a layout template. [S:\Quality Handbook\Templates\DR-SE Project Templates\production]

How are test templates reused?
• Templates are not reused very much.
• The question is whether general templates can be fitted to the projects.
• Material is taken from old projects.
• Development should produce the test documents to a greater extent than now. Operations should only refine the tests and possibly reorder them, not produce them from scratch.

Are personal templates used?
• Yes, personal templates that people are happy with are used.
• Mikko has produced a standard template for testing computers.

Risk-based testing?
• Yes; what the customer uses most is of course the most important.
• It becomes very problematic if the customer discovers after a year that a rarely used function does not work. Was it broken from the start or not?
• Yes, we have skipped tests that statistically yield very few faults when the customer is in a hurry.
• Tests such as 24 h heat tests can be skipped, since they also shorten the product's lifetime, provided that sample units have passed the same test.
• We do not dare remove simple tests entirely. The time saved by skipping a test of e.g. Ethernet, compared with how bad it is if it does not work, is too small.
• For products assembled in e.g. Taiwan, we start by testing every unit in the first shipment, then perhaps 50 % and later 10 % or spot checks.

Acceptance testing is something the developer does on individual units; it consists of tougher and more thorough tests to verify ALL requirements on the product (cold, heat, signal quality, power consumption, etc.). Production testing is done at Operations; it is a fairly simple functional test where the product is simply run and checked to behave as it should.

Do you do field tests?
We look at the RMAs that come back, otherwise no. The customer cannot always test immediately.

If you could choose, how would testing be done?
• A test owner, a dedicated test engineer to consult, would be good. A standard template or cheat sheet could come in handy.
• Closer cooperation, with regular meetings between the customer and DR, makes both requirements and concepts hit closer to the customer's actual wishes. It can be mentioned that other offices develop a simple prototype model even before system requirements are created, in order to have something to talk around.
• Nothing in particular should be changed. Possibly spend more time on testing, since faults that appear in the field become very costly.
• The tests should be more automated so that the human factor can be eliminated. It is easy to miss one item out of 50 after a whole day of testing. Long checklists should be run through by a computer.
• One would like to reuse I/O tests from earlier products, but rarely does, because they were not saved in a sensible place, they were written for another platform, and so on. The result is repeating all the careless mistakes one always makes when writing entirely new code or designing an entirely new board.
• There should be more spot-check testing in production.
• It is in the tests that one can find the faults that can improve the manufacturing process.

How large a share of the development/production time consists of testing?
• For production tests on the assembly line, an estimated time is invoiced. This time may actually be shorter, but that information has not been processed.
• For products assembled by Operations, an estimated 10-15 % of the time goes to testing, often 10-20 minutes per unit. Long burn-in tests are not included, since they require no supervision.
• What comes from Taiwan first arrives at DR, where it is unpacked, tested, repacked and shipped to the customer. The VTC, for example, takes about 20 minutes to test; packing takes more time than one would think.

How does feedback to the developers work?
• Perfect feedback on every fault is a luxury few companies can afford. Tommy wants it automated, which to some extent it will be with the introduction of Arena as the document management system. For example, a change from a steel washer to a nylon washer must be entered in Arena.
The developers will have to approve this change if they are also listed as owners. Otherwise they will see the change the next time they open the project.
• If development kept in mind that, e.g., internal cabling drastically increases the risk of faults, the number of faults would decrease.
• Currently, things are supposed to be written continuously in documents that the developers see in the system, but it does not always work that well. Operations should be able to handle and carry out small changes themselves.

What kind of system do you have for saving test information?
• Excel; the documents are then named according to a structure.
• SVN has a test folder where tests end up. SVN is used seriously today.
• The project manager is supposed to create a folder structure for new projects; there are scripts that do this. This does not seem to be followed, however.
• It is unclear exactly what is to be saved regarding tests.

How do you find information about old tests?
• If you look for bug lists you find a lot.
• You search in SVN.
• Don't know.
• In old logs you find test outcomes.
• HOW the tests were done is very hard to find out.
• Arena is supposed to capture this, though.

How are statistics used to develop new tests for similar products?
• We look at RMAs and propose improvements to the product. This also means improvements to the tests.
• There is also a "common faults" folder, e.g. "cable no. 3 is easy to pinch when mounting the display".

What access do you have to statistics on common faults in different phases of the life cycle?
• No statistics are easily available, since much of the data sits in Excel sheets.
• Statistics are kept track of manually to some extent (in people's heads).
• Power Grade, for example, has very good RMA statistics.
• Raw data in Excel sheets always exists for every product.

Have you had previous jobs where testing occurred?
A dedicated test engineer was employed, which worked very well. Whether such a role could be kept busy in Data Respons's case is open to discussion.

Examples of misses in testing; what is problematic?
• Problems with EMC can be hard to handle, since EMC must be tested outside the building → it is not possible to iterate one's way to a solution.
• BBU is hard to handle because you do not know whether the fault lies in the software or the hardware.
• The tests the developers design are rarely optimized for being done quickly in production. For instance, you want to do all the tests that require "this particular screwdriver" in sequence, to avoid constantly switching tools. You may therefore want to reorder the tests, but that is not always possible. And even when it is, the customer may become suspicious of deviations from what the acceptance specification says.
• We tried weighing the products to check that nothing had been missed in assembly. But the tolerances of the components vary so much that two identical products could differ by on the order of 5 %. It became impossible to determine whether something was missing from a product.
• The power supply of the Transas box broke in cold conditions. It had not been tested for cold.

Who takes care of those statistics to find systematic faults?
Tommy

How could the administrative work around testing be made smoother?
• An overall cheat sheet with guidelines; right now everyone does it their own way. If such documents already exist, they should be made more visible, e.g. a pre-filled acceptance test template listing all tests that are common at DR, from which the unnecessary tests are then removed.
• We would benefit from being able to extract statistics from raw data more smoothly, e.g. pick a product in a program, pick a time period, pick a type of fault (fault in functional test or fault in the field) and display all the different faults.
• Better test specifications, written by someone who knows how to write test specifications. Development should do that. Test descriptions should be written by Operations and Development together.
• It would be good to avoid filling in Excel sheets manually. Mistakes easily happen, e.g. when entering file names.
• Test documents and associated files are to be kept in Arena.

11.2.3 Detail
How is testing done today?
• It would be nice if things were tested the same way every time; e.g. for USB, always test voltage, current and data rate.
• Often you plug a dongle into the port to be tested, and if it works you write "OK" in an Excel sheet. If someone else asked you to run a test for them, the result may be an "OK" in an e-mail. That e-mail becomes very hard to find later.

How common are the ports? How are they tested?
• RS232 — Common! Used for debugging. Test: loopback plug, or test two ports against each other if the machine has two.
• RS485 — Common in marine applications. Test: same as above.
• Ethernet — Very common, including internal networks. Test: often a loopback plug, or connecting to a switch.
• CAN — Common in the automotive industry. Test: RS232 adapter.
• I2C — Fairly common.
• PCIe — Common. Test: often used for hard drives; if the product boots, the bus works.
• SPI — Often internal communication; not tested by Operations.
• SATA — Becoming more common.
• USB — Very common.
• VGA — Very common.
• DVI — Fairly common.
• Audio — Uncommon.
• CF / memory card reader — Common.
• DisplayPort — Uncommon, but coming.
• Parallel port — Occasionally.
• WLAN — Common.
• 3G — Common.
• GPS — Common.
• Power consumption — Interesting for Development, but not necessary for Operations.
• Heat — Not something they usually measure.

Mikko's five favourites: RS232, RS485, USB, VGA, Ethernet

How do you test internal components?
• Buses — If the machine works, the bus works; rarely broken.
• Memory — If the machine boots, the memory probably works; rarely faulty.
• CPU — Same as above; mass-produced, hard to test.

How is it decided which tests are to be done?
• We sit and ponder, sometimes together with Development, sometimes alone.
• The head of Development has the final say.
• We will see how the next handovers to Operations go.
• We look at the test spec that Development is supposed to have produced.
• Sometimes it gets odd: if a test the developers want done requires equipment costing a million, we look at whether we can replace that test with something else.

Equipment?
• Oscilloscope: checking the clock on a bus, timing between different signals
• Multimeter
• A short-circuit tool that emits tones of different pitch depending on resistance
• In some cases a PC is connected to the product
• Multimeter
• Oscilloscope
• Cabling
• Pressure test (leak detector) (for IP classification)
• Unique equipment for certain products
• Various test plugs, where the system under test is its own test tool

How could these hardware tests be made better and simpler?
• Reuse code and have ready-made procedures and methods for testing.
• Tommy believes shake tests would be good for tormenting the products a little. Faults are often due to a screw lacking a lock washer or a cable being pushed in a bit too loosely. (Assemble → shake → function-test)
• In a perfect world, we receive a bootable image from Development. We boot from it, it runs its test program automatically and reports Pass or Fail.
• Sometimes things must be partially disassembled to be tested. It would then be good if Operations and Development had cooperated on the workflow, so that it flows well for those working hands-on with the machines.
• It would be good if it were easier to make changes that simplify production and testing.
• The test time (and price) is small compared with the final price of the product. More wisely chosen tests would probably increase the reliability of the products, with fewer costly RMAs as a result.
• One item that should be part of the development phase is that someone from Operations comes and looks at drawings, etc.,
in order to identify, at an early stage, problems that may arise in production and testing, and to suggest improvements.
• A general Linux/Windows distribution that could be used for board bring-up would be convenient.

What is perceived as the biggest problem: that there are no sufficiently good routines for testing, or no sufficiently good equipment?
It could be that:
• an administrative solution gives the most in development, and
• a test rig gives the most in production.

Ideas about what Tommy wants from our thesis project; what is needed in terms of testing?
• Produce a test strategy. What is DR's test strategy?
• Create a box with various interfaces linked to an automated program. The logging should also be automatic (a logging program was recently made and must be examined); it would be good to be able to extract statistics from these logs smoothly.
• Communicate over several different interfaces.
• Testing should be faster and smoother, with less risk of careless mistakes.
• A test bench with permanently mounted test equipment (a bit like the one for Instant DVD) but more general. It should be possible to test all the most common ports without having to dig out test equipment.
• What we do should involve and foster a design-for-test mindset, i.e. already in the design phase thinking about how testing can be made easier.

Ideas about what Mikko wants from our thesis project, see [Illustration 44]: to be able to load a port or power supply with 5-12 V and at most 120 W.

Illustration 44: Mikko's idea for a test box

11.3 Örebro

11.3.1 General
What steps do you have in development?
• The customer contacts us with an idea.
• Either we help the customer create a requirements specification, or we receive a finished one.
• We produce a Design Description, DD: how we solve the customer's requirements.
• It should mirror the System Requirements; the SR contains only descriptions of technical solutions and often follows standards.

When does the test phase enter the projects, and how does it work?
• In the Bombardier project it entered very early because of the safety classification, which is good.
• If the customer has pushed requirements and has great technical knowledge of their own, testing tends to get more priority.
• If DR has all the responsibility, testing easily gets deprioritized, because testing is among the last things done; if money and time have run out by then, testing is all that is left to cut.
• Some developer develops something; then Jörgen Larsson is tasked with designing tests that verify the requirements. The test owner creates tests according to what the requirements specification says.
• They call the final test Design Verification (acceptance test).

What templates exist for doing tests? Where are they?
• Every project is so different that there is no template for how projects are run.
• However, the developers know from experience that a military project implies certain requirements, while a medtech project implies others.
• If volume production is known from the start, it is considered from the start: component choices, assembly sequence, testing in production, and so on.

How are test templates reused?
• Relevant information cannot be found in old project folders, because there are too many folders and the information is not sorted in a uniform way.
• It would probably be too time-consuming to read through old reports in order to reuse tests and project templates.
• Jörgen Larsson (test engineer) received a test template from John Andersson (former head of Development). But there are no other templates that they know of.

If you could choose, how would testing be done?
• When a project is started, the project manager or head of Development should state which templates and guides exist and are to be used, so that everyone in the project documents in the same way and according to the same file-name and folder structure.

How large a share of the development/production time consists of testing?
• Design verification / production testing takes the most time, especially if faults are discovered that require changes to the design.
• It takes some time to gather equipment for environmental tests (climate, etc.).
• One of the hardware developers spends about 20 % of his time on testing.

What kind of system do you have for saving test information?
• Excel on SVN
• A traceability matrix is used by some.

How do you find information about old tests?
• SVN

What is problematic? Examples of misses in testing?
• EMC; it is a bit of hocus-pocus.
• The test houses (Semko, Nemko, etc.) measure at which frequencies and at what strength the boards radiate. They also expose the boards to EM radiation and check that functions are not disturbed.
• A list of which test houses do what would be useful.
• Troubleshooting is cumbersome. It is hard to know whether a fault is in hardware or software. It is usually about 50/50, with a slight bias towards software faults.
• Intermittent faults are hard to identify, reproduce and solve.
• One rarely knows exactly how long testing may take.

How could the administrative work around testing be made smoother?
• Someone says an empty template with headings would be good.
• Others say they are not keen on a template filled to the brim, because no one would maintain it and replace old tests, so it would quickly become outdated.

Detail
How is testing done today?
• A convenient way to test ports is to bridge, e.g., RX and TX in a physical port (possibly with a resistor to simulate a long cable) and let the board talk to itself. This requires software that sends data and receives it back. The software must often be tailored for different types of boards.
• BBU (board bring-up) is often so different from board to board and from developer to developer that there is (as yet) no template for how it should be done.

How do you test low-level drivers?
• You want to test the hardware with small snippets of software, to be certain of exactly what the software does and that there is no fault in the code.

Equipment?
• For MTBF calculation (Mean Time Between Failures), a thermal camera is required, e.g., to know how hot each component gets during normal operation.
• Oscilloscope, used in the development phase and during troubleshooting
• Multimeter
• NI products are not used at present, but in production/factory they are probably worth the money, as they are flexible and have built-in report handling.
• Optical tester (for fibre)

How could these hardware tests be made better and simpler?
• Often the same test methods used in development go out to production. These tests are not always adapted to being done quickly or in parallel. It would be good to consider, already when designing tests in the development phase, how the tests will be done in production.
• They would like a generalized model for how testing should be done; deviations from the template can then be made if required.
• Someone should own the responsibility for testing.

What is perceived as the biggest problem: that there are no sufficiently good routines for testing, or no sufficiently good equipment?
• An organizational tightening-up would probably do the most good.
• Better frameworks, templates and methods for when and how testing is to be carried out would probably untie more knots in the development process. If DR (and through it, every project) had a person responsible for driving the test question, good templates and frameworks would probably emerge fairly quickly (since that person would presumably tire of redoing everything from scratch all the time. — Gustav's remark)

11.4 Lund

11.4.1 General
About DR in Lund (LundiNova):
LundiNova was bought by DR in January 2008 (3.5 years ago at the time of writing) and has essentially been allowed to keep working in its own way. They do not use the same file system as Örebro and Kista. They are their own little clique, doing their own projects, often with assignments from the employees' previous employers. The atmosphere at the office is very creative, and most people seem highly competent and well attuned to each other. They take a coffee break (preferably out in the sun) every day at 15:00, and many question marks are straightened out then, concerning both advanced projects and private garage renovations. These short daily "meetings" probably do more good than one might think. In Lund they often develop compact special-purpose solutions.

What does a typical development project look like for you?
Example:
• The customer comes with an idea.
• We sketch what is feasible and give the customer different options.
• The customer decides what they want to do.

What steps does development have?
• A project comes in.
• Quickly produce a quotation; it usually turns out fairly accurate.
• Quotation sent off.
• When a project is started and design is about to begin, a group often gathers in the conference room.
• A wild brainstorm with a mass of fantastic solutions and ideas.
• Then things calm down a bit and the realistic plums are picked out of the pudding.
• A rough drawing is made.

They recently ran a project using the Scrum project form:
• The word Scrum comes from a move in rugby. The project form is relay-like, with clear handovers between members of the working group.
• The project is chopped into short "Sprints" with clear goals. Daily five-minute meetings (Daily Scrums) are held, where all members briefly answer the questions:
1. What have I done since the last meeting?
2. What will I do before the next meeting?
3. What obstacles are in my way?
Scrum has worked well, since the short daily meetings prevent DR's and the customer's views from diverging, which otherwise happens fairly quickly.

Which roles and individuals exist in the projects?
If we get the project, it tends to be that "YOU" become project manager because you were standing closest.
"YOU" become the developer because you happened to be available. It is very ad hoc.

The scope of the projects varies widely. Everything from:
• someone gives us a circuit diagram and we CAD a PCB with no functional responsibility whatsoever, to:
• "We have an idea, help us with everything to make it real" — from the customer
• "We have an idea that would suit you and your products" — to the customer (somewhat unusual, though)
• Concept tests: someone wants a prototype built in order to continue experimenting on their own.

The workflow often looks like:
• Concept study
• Feasibility study: can we fit this in? Can we do it at a sensible price?

The normal case is often:
• The customer knows what they want.
• We make a prototype.
• The customer decides whether to proceed with full development.

When does the test phase enter the projects?
• It is usually discussed at the beginning but still ends up in the backwater.
• Test and verification are often what take the most time.
• Tests are chosen based on experience.
• Robert Johansson often does tests that the customer has not specified but that are needed, and the customer usually finds that acceptable and pays for it. "When I draw a schematic, I think from the start about where I need test points. When you design things for the phone industry, perhaps only one fault in 5 million products is allowed. That requires ICT (In-Circuit Testing) and therefore a very carefully made PCB."
• First and foremost, we do what the customer demands of us.
• If we have design responsibility, we think carefully about testing from the start.
• You design with the requirements constantly in the back of your mind.
• Testing is written into the project plan by the project manager, but someone else chooses which tests are to be done. The test owner chooses tests based on the requirements specification.
• A smoke test, or BBU, can take several weeks for large boards.
• Temperature tests and a preliminary EMC test are planned.
• Preliminary EMC tests are done at "Håkan's", in his radiation-shielded garage. He has good tools for it but is not certified.
It is, however, very convenient, and the developers get to do the tests themselves, so they see with their own eyes what is wrong.
• When things are to be certified, they are sent to Sony Ericsson, Delta or SP, among others.

What requirements does the customer usually have?
• For a furniture company there are ready-made test templates, produced together with them.
• Sony Ericsson has very high requirements and detailed specifications. This makes it easy to design tests.
• Temperature tolerance
• Power consumption
• EMC
• EMI
• Lead-free solder
• The customer rarely demands test documents.

How are the tests fed back against the requirements placed on the product?
It is a hefty document with values and tables, voltage values referenced to data sheets. Tests must be weeded out between development and production. It is not entirely trivial to choose which tests are to be done in production.

What templates exist for doing tests? Where are they?
• Don't know whether templates for ISO standards exist.
• No special documents for it; we run on experience.
• Mentorship is applied unofficially. A new developer is "held by the hand" at first by whoever knows that particular problem best. The coffee breaks are important as small daily meetings. Many ideas are exchanged there, and you learn things you might not have thought to ask about.

How are test templates reused?
• Templates from Sony Ericsson are reused.
• We look at how similar functions were tested and verified in similar projects.

Are personal templates used?
They are reused to some extent. If you know of a similar project/test, you can take that test document and replace some of the headings. Templates would probably be good, but they would likely be hard to implement because the projects are so different. A document that covered a lot would become very large and complicated.
Regarding templates: if we often ended up with the same architecture, it would be worth writing our own standard documents. But since the customer often already has wishes about which platform it should be, it is rarely the same platform.
Besides, if you stayed on the same platform all the time, you would probably fall behind in development.

Risk-based testing?
• Yes, sometimes.
• Rough schematic → review → risk-based tests → time-consuming but good!
• It is, however, probably not profitable for simpler products (e.g. hands-free kits, mobile speakers) in which occasional faults can be tolerated.
• Sometimes too much is tested in production; it often becomes too expensive at large volumes.
• Sometimes, though, too little is tested.
• An amusing example is told about EM-radiating transformers on utility poles out in the countryside and in certain neighbourhoods. They radiated far more than allowed, despite being "tested" and certified, and jammed radio amateurs.

What kind of system do you have for saving test information?
• Version-controlled document management. It exists now, but no one uses it. In previous jobs "Tortoise" was used. Now a plain file area is used.
• We write a test report and save it on the file server; background data is sometimes saved.
• Some documents are saved, but they are mostly working documents, often plain .xls and .txt files.
• SVN is not used; the CAD program does handle it, though, so it really ought to be done.

How do you find information about old tests?
Look in the document structure on the file server: project folder on server > customer name > project name > SW, HW, Management > test

What is problematic? Examples of misses in testing?
• It is very difficult if SW and HW development are in different places.
• If the customer has poor HW knowledge, it is hard to justify the test costs to the customer.
• Some processors require full JTAG to do boundary scan.
• Environment-dependent faults are difficult: hard to reproduce and to find the cause of, and above all hard to know whether the fault is gone.
• When testing takes place abroad, communication between developers and testers is difficult. It is especially hard to talk with the Chinese and Indian teams. There is a lot of "please advise" and very little initiative of their own. And when initiative is taken, things easily go wrong anyway. For example,
the test manager was an 18-year-old Indian who built his own bed of nails for ICT and tested one GSM module at a time for Ericsson, despite equipment worth about one million kronor having been sent down.
• Blunders → mirror-flipped connectors, for example.
• Most of the time everything works fine, and the boards are then tuned in a little further after testing.
• Black-box testing often requires peripheral equipment from the customer.
• When information about components/products from the customer/supplier is missing, it becomes hard to test so that the equipment lives up to its promises.
• It is important to be able to repeat the tests with the same results. That is very hard with RF.

When and how does the benefit appear? Examples of successful tests?
• It is much easier if everyone sits in the same building. Otherwise an "us versus them" feeling easily arises.
• The easiest way to avoid trouble is to be really thorough!
• A company in Årjäng was hired to test a product. The requirements specification was sent along; the test company was completely self-sufficient and even came with suggestions for improvements to make testing the product easier next time. Very good!
• Once, they wrote software that simulated not-yet-finished hardware in order to test earlier. That hardware was then delayed as well, so the time spent creating the hardware simulation paid for itself many times over.

How could the administrative work around testing be made smoother?
• Perhaps a template for how to start up projects. But that will probably be difficult, since the projects are so different.
• A test document with ready-made tests to choose from might be good.
• Tools for creating formalized flow charts: for showing the customer, showing colleagues, and forming one's own picture of what one is doing. Both for project flow and data flow. Or rather: tools such as Visio already exist; it is more that it should be done more often and more carefully.
• The administrative side must not be made too detailed.
• A process template would be good:
• design phase →
• review by two Senior Developers →
• answer questions such as "how will you test this requirement?" →
• tollgates...
• A while ago there was a project group of different people at the office working to improve internal processes, but it is no longer active.
• There is an open-source version-control program called "Alfresco".
• At present only SW uses SVN for version control.
• Tools for version-controlling HW files would be useful.
• Support documents for, say, how to fire up an oscilloscope would be good.
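One successful example mentioned in these interviews was writing software that simulated not-yet-finished hardware so that testing could start earlier. A minimal sketch of that technique follows; the sensor interface and the overheat check are invented for illustration and are not taken from any actual project.

```python
# Sketch: a software stub standing in for hardware that is not yet finished,
# so application logic can be tested before the boards arrive.
# The sensor interface and the overheat guard are hypothetical examples.

class TemperatureSensor:
    """Interface the real hardware driver would implement."""
    def read_celsius(self):
        raise NotImplementedError

class SimulatedSensor(TemperatureSensor):
    """Replays a scripted temperature profile instead of touching hardware."""
    def __init__(self, profile):
        self._profile = list(profile)

    def read_celsius(self):
        # Return the next scripted sample; hold the last value when exhausted.
        if len(self._profile) > 1:
            return self._profile.pop(0)
        return self._profile[0]

def overheat_guard(sensor, limit_celsius=85.0):
    """Application logic under test: True while the reading is safe."""
    return sensor.read_celsius() < limit_celsius
```

Because the application code only sees the `TemperatureSensor` interface, the stub can later be swapped for the real driver without changing the code under test.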

11.4.2 Detail
Common interfaces?
• Almost always RS232, serial link.
• I2C is perhaps the most common on the boards themselves.
• JTAG and serial port are the most common for debugging.
• The first prototype is made very carefully; the second prototype can be less careful.
• Buses and signal rise and fall times are always tested.
• Multimeter, oscilloscope and current probe are often used, sometimes a data logger.
• One often sits comparing against the spec or datasheet just to check that everything matches.
• Memory: read/write to all sectors of the memory.
• AD converter: potentiometer, check that it responds as it should.
• Test one protocol at a time.
• It is not always enough that it just works; it must also work fast enough.
• (Many of the projects are Linux-based.)
• It is smart to write test code in a different language from the one used for the test object, to avoid repeating the same mistake if something in the language itself has been misunderstood.
• ResQu:
• EMC, EMI
• wrote specific test programs that run on the HW
• HW-specific LEDs on the PCB
• special SW for testing in the EMC chamber
• a lot of testing to get the product certified, since it is rescue equipment
• writes test functionality and HW functionality, e.g. a self-test that stops the device if something is wrong (e.g. not all processors running the same SW version).

How can a test be carried out? The requirement specification for one project stated that the product should perform self-diagnostics, so it was designed for that:
• start the microcontroller on the internal clock
• let the internal clock check that the crystals run reasonably true
• check voltages with the microcontroller
• check peripherals via SPI by asking for the device's serial number; if the device answers with a serial number, it has understood the question and most likely works as intended.
One also goes partly by feel: "I designed and drew the board, so I have a good idea of how it should work."
• Order the boards with certain components not mounted.
Then it is easier to test piece by piece without involving, say, the processor.
• Try applying voltages here and there.
• Try applying a step load.
• Measure on the microcontroller pins.
• Check that it behaves normally.
• Power everything up: perhaps 20-30 check points, pass/fail, with one test per requirement, e.g. with reversed supply polarity at most 0.6 V may appear across component #123.
• Does it behave as in the simulation?
• No careless mistakes; check one extra time.
• Insert the microcontroller and hand over to the SW people.
• Then follows an iterative process between HW and SW developers.

In PCB design it is always smart to route out pins for JTAG and boundary scan. It is also smart to place measurement points in as many accessible places as possible; it is easier to remove a measurement point than to add one. If not every component needs testing, the measurement points can be placed strategically so that small clusters can be tested separately.

What do you think about a general test OS that can be reused with some modifications?
It can be difficult, since CPU brands and architectures often differ. Linux is used quite a lot, though. Between 10% and 20% of the projects run open source, rarely Windows. It is becoming more and more Linux.

How is it decided which tests to run?
• Often one sits with a SW developer and asks them to write short, simple scripts that, say, light LEDs.
• Otherwise one tests what one believes is most likely to misbehave.
• Different from one time to the next.
• Sometimes tests that need to be run are discovered 50-70% into the project.
• Sometimes the specs are messy, and then the customer has to be asked again.

Equipment?
• Oscilloscope
• Multimeter
• Logic analyser
• Microscope
• Function generator
• They feel they have the equipment they need.

How could these hardware tests be made better and simpler?
• Much is done according to one's own judgement. There are no documents with guidelines or processes for how testing at least ought to be done. It is a bit embarrassing.
• It can be a bit hard to get hold of the right ISO documents; they are expensive.
• Perhaps a short course covering common design mistakes, common failures due to temperature, etc.
• A box with variable supply voltage, or supply voltage in fixed steps.
• Also current measurement (perhaps current limiting too) that can show average and peak current.
• A bit quick 'n' dirty: if more accurate measurement is needed, a multimeter or oscilloscope is used.
• A box that sits on the desk as a tool for board bring-up:
• simple USB exercising
• pulse generator
• a set of logic pins that can be driven high/low at different voltages.

What is perceived as the biggest problem: that there are no good enough routines for testing, or that there is no good enough equipment?
Something that would be very useful is a test group at DR dedicated to testing and responsible for the test part of every project. They could also hire out as test consultants and help customers with testing. A big advantage of a test group would be that knowledge accumulates and is passed around within the group. Testing requires experience; a new test engineer would probably climb quickly in knowledge as part of a group of experienced test engineers.

How are different configurations of the same product handled from a test perspective (Onboard Parameter Specification)? OPS

11.5 Västerås

11.5.1 General
What does a typical development project look like for you?
About every other time there are good requirements to follow when testing. Testing comes last, so it is easily skipped when time runs short.

What steps does the development have?
The workflow can look like: issue → time estimate → debug → test yourself → another developer reviews the code → merge into the code branch → another developer tests again → test the new code against the old. So three tests should be performed by three different people.

Test lead?
There have been test leads in various projects, but they have not contributed much.

When does the test phase enter the projects?
In the form of simulations where the function must meet certain functional requirements.

What templates exist for testing? Where are they?
• There are not many templates, since the project owner does not want to carry the cost of producing good governing documents, because most of the time things work out OK anyway.
• One probably needs two identical cases that can be compared, to show whoever holds the money that this much is saved by testing more carefully. (Compare the "Alvedon problem": you do not know whether it was the pill that fixed the headache or whether it went away by itself.)

How could the administrative work around testing be made smoother?
• A clear requirement specification.
• Testable requirements, for traceability.
• At DR it is probably still possible to change things and create governing documents that could work, because such things are missing right now.
• Feedback is important. (At Ericsson, when David worked on a radio base station for LTE (4G), every requirement in the specification for every sub-function was directly testable. The function specification with testable requirements was reviewed by five people before approval. This does, however, make everything very cumbersome.)
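The point raised here about testable requirements and traceability can be sketched mechanically: if each requirement ID maps to a test callable, coverage gaps become a query rather than a document hunt. The requirement IDs and checks below are hypothetical illustrations, not taken from any real specification.

```python
# Sketch: mapping requirement IDs to test callables so that every requirement
# is traceably covered by at least one test. All IDs and checks are invented.

REQUIREMENTS = {
    "REQ-001": "Supply voltage shall stay within 4.75-5.25 V",
    "REQ-002": "The unit shall answer a serial-number query over the serial port",
}

TESTS = {
    # Default arguments stand in for real measurements and responses.
    "REQ-001": lambda measured=5.0: 4.75 <= measured <= 5.25,
    "REQ-002": lambda reply="SN1234": reply.startswith("SN"),
}

def untested_requirements():
    """Requirements with no test attached: the traceability gap to close."""
    return sorted(set(REQUIREMENTS) - set(TESTS))

def run_traced():
    """Run every test and report PASS/FAIL per requirement ID."""
    return {req_id: ("PASS" if test() else "FAIL")
            for req_id, test in TESTS.items()}
```

A report keyed by requirement ID is exactly what makes a result traceable back to the specification.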

11.5.2 Detail
How is testing done today?
• The SW simulation is often enough.
• FW is tested from a functional perspective.

How could these hardware tests be made better and simpler?
• A good test department with skilled testers.
• Application specialists who know the products inside out are important, e.g. nurses who have worked with a certain type of equipment for years and know what is good/bad and how it should work.
• "Field tests in the lab" would be needed, since third-party components can differ so much, e.g. video systems (cameras, displays, etc.) in trains.

APPENDIX 2: ADVICE FOR DECISION

Abstract
This document describes the two different ways to carry out the Master's thesis "Construction of generic test environment for embedded systems": the given task is solved either from the top down or from the bottom up. The two approaches and their probable results are described briefly. At the end, our advice for the decision is given.

Content

Abstract
Top-Down
  Motivation
  Approach
  Limitations
  Main goals
Bottom-Up
  Motivation
  Approach
  Limitations
  Main goals
Our advice

Top-Down
The definition of "Top-Down" is finding the cause of the problem at hand and solving the underlying problem instead of the symptoms, and/or finding a larger problem that can be solved with less effort. The method makes it possible to find and solve the right problem. Effort will be put into pinpointing the problems and giving advice on how to solve them, rather than actually solving them.

Motivation
• Gives a better general understanding of what needs to be improved regarding tests at Data Respons Sweden.
• There are large uncertainties about how testing is administered and what routines exist concerning testing.
• The time consumed by recreating test documents is fairly large and is badly needed for actual development.
• The developers use their own templates for test documentation. This makes quality assurance difficult, and documents become hard to handle by anyone but their creator.
• Many of the developers and project managers don't know what templates are available.
• For smaller projects, the sales person sometimes becomes project manager. This often makes documentation and tests for off-the-shelf systems inadequate, which causes problems if the responsible person is no longer available and there are no documents at hand.
• Many of the developers from Örebro and Lund spontaneously say that better documents and guidelines regarding testing would probably improve more than getting a technical solution for the hands-on testing.

Approach
• Find out what templates and project models are available at the different offices.
• Analyse:
• several projects, looking at which templates, if any, have been used.
• what has gone wrong and whether it could have been avoided with better tests and test templates.
• the most cost-efficient ways to make sure similar errors are not repeated.
• Improve:
• the analysed material, in order of importance.

Limitations
• This method will not include quality management such as routines for cross-checking. It will concern only the actual testing and the documentation of that testing.
• Effort will be put into pinpointing the problems and giving advice on how to solve them, rather than actually solving them.

Main goals
• Shortening the time spent on testing by:
• creating templates for board bring-up logging, prototype test documentation and production test documentation.
• making sure the tests that are made are relevant. Looking at what goes wrong with existing products (RMA, test logs) gives a picture of what is prone to fail.
• developing or investing in new test equipment. The equipment will either be bought off the shelf or developed.
• Ensuring the quality of the tests made by:
• providing instructions on how traceable and repeatable tests should be executed and documented.
• supplying the tools needed to run tests with relevant and quantifiable results.
• Making sure data from tests is stored appropriately and is easy to access by:
• analysing ways of implementing automatic documentation into new test equipment.
• analysing ways of making test logs searchable and easy to pull statistics from.
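One way to make test logs searchable and easy to pull statistics from is to store each result as a row in a database rather than in free-form documents. A minimal sketch using SQLite follows; the schema and field names are our own illustration, not an existing Data Respons system.

```python
# Sketch: a database-backed test log. Statistics become queries instead of
# manual spreadsheet work. Schema and field names are hypothetical.
import sqlite3
from datetime import datetime, timezone

def open_log(path=":memory:"):
    """Open (or create) the test-log database."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS test_log (
                      serial TEXT,      -- DUT serial number
                      test   TEXT,      -- test name
                      result TEXT,      -- 'PASS' or 'FAIL'
                      stamp  TEXT)""")  # ISO-8601 timestamp
    return db

def log_result(db, serial, test, passed):
    """Append one traceable, timestamped result row."""
    db.execute("INSERT INTO test_log VALUES (?, ?, ?, ?)",
               (serial, test, "PASS" if passed else "FAIL",
                datetime.now(timezone.utc).isoformat()))
    db.commit()

def failure_rate(db, test):
    """Example statistic: fraction of FAILs for a given test."""
    total = db.execute("SELECT COUNT(*) FROM test_log WHERE test = ?",
                       (test,)).fetchone()[0]
    fails = db.execute("SELECT COUNT(*) FROM test_log WHERE test = ? "
                       "AND result = 'FAIL'", (test,)).fetchone()[0]
    return fails / total if total else 0.0
```

Any question of the form "how often does this test fail, on which serials, since when" then becomes a one-line query.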

Bottom-Up
The definition of "Bottom-Up" is dealing with the problem directly, without really looking into what caused the problem in the first place or searching for other, greater problems. In our case we focus on developing a solution to known problems and limitations and on minimising human interaction. Only support documents and routines regarding the solution are produced.

Motivation
• Data Respons benefits directly from the work of this approach, as it focuses on finding a solution rather than on a deeper analysis of the problem.
• Functional testing is highly manual.
• Functional testing is time-consuming.
• The method for carrying out tests in production is highly product-specific.
• The way tests are logged in production is highly manual.
• No voltage, current or load tests are done in production.
• Operations personnel agree that highly monotonous tasks may cause human error.
• There are no methods to:
• repeat tests many times.
• run the exact same test again, years later.

Approach
• Analyse what development and operations in Stockholm need regarding tests and automation.
• Analyse how tests can be done better.
• Develop a method for automated logging of voltage and current on external ports and the power feed during functional test.
• Develop a solution for test automation and logging.

Limitations
• Only a few of the ports and protocols will be fully implemented within a reasonable time.
• The end users of the test box and the test OS will primarily be the Operations and Development departments in Stockholm/Kista.

Main goals
• Be able to quickly make physical-layer tests
• Speed up repetitive tests of multiple units
• Minimise errors caused by monotonous tasks for the tester
• Make tests traceable and repeatable
• Automate documentation of tests
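These goals point towards a small test-execution loop that runs named tests against a unit and documents the outcome automatically. What follows is only a sketch of that idea under the goals listed here, not the framework that was actually built.

```python
# Sketch: a minimal test-execution loop with automated documentation.
# One JSON line is appended per unit tested, giving a traceable record.
import json
import time

def run_suite(serial_number, tests, log_stream):
    """Run named test callables against one unit and append a log line."""
    record = {"serial": serial_number,
              "started": time.strftime("%Y-%m-%dT%H:%M:%S"),
              "results": {}}
    for name, test in tests.items():
        try:
            record["results"][name] = "PASS" if test() else "FAIL"
        except Exception as exc:
            # A crashing test is a failure too, but the cause stays in the log.
            record["results"][name] = "ERROR: %s" % exc
    record["passed"] = all(r == "PASS" for r in record["results"].values())
    log_stream.write(json.dumps(record) + "\n")  # one JSON line per unit
    return record["passed"]
```

Because every run appends one machine-readable line, re-testing many units becomes a loop, and the documentation writes itself.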

Our advice
In our own judgement, a Bottom-Up solution would be the best approach. A light version of the Top-Down analysis has already been done in the pre-analysis, and the need for the test system could be verified, along with the need to improve the administrative work.

Data Respons benefits directly from an automated and simplified test platform. Constructing an instrument for testing is close to mechatronics, which is both authors' area of expertise from their engineering education. The solution can also be developed further to create automated installation of custom operating systems and other features that would support both operations and development.

There is a need for structuring the office in Kista, and even more so the other offices in the country, regarding support documents and routines for both developers and project managers. This, however, should be done at a strategic level with respect to the different backgrounds, customers and cases. The work should be done by someone with long experience of developing embedded systems.

APPENDIX 3: LIST OF TESTING ISSUES

Problem | Details
Test EMC/EMI | hard to pinpoint
Test EMC/EMI | takes time
Distinguish HW from SW error | Is it HW or SW that is the problem?
Solve intermittent errors | Hard to find random errors
Time to rig test | Finding necessary hardware and software
Test execution time | Time to connect test equipment to DUT
Test execution time | Time to run the test on a DUT
Test power circuits | Tools used to load power circuitry are complex
Test power circuits | Manual method
Test power circuits | Inflexible methods
Test power circuits | Heavy equipment
Log power electrics | Power test logging is manual
Time to boot DUT | Time to boot test system
Time to boot DUT | Time to physically mount bootable media in DUT
Time to disassemble DUT | Time to disassemble DUT
Time for test creation | Time to build a new test
Time for test creation | Verify that the test is relevant
Time for test creation | Decide what is PASS and what is FAIL
Careless mistakes | Highly manual routines make room for human error
Careless mistakes | Bored operator easily makes mistakes
Poor test coverage | Difficult and time-consuming to test odd devices
Poor test coverage | Devices may require special SW from customer
Poor test coverage | Test requires special environment
Logging errors | Manual logging is not always correct
High logging time | Test logs are manual in some tests
High logging time | Some tests might not be necessary
Poor logging level | Some tests do not have a low-level log
No BIOS settings log | BIOS settings are not always logged
Difficult to run old tests | Test software not available
Difficult to run old tests | Test hardware not available
Poor DFT | Complex methods to test DUT
Test overheating | Temperature tests
Visual inspection logging error | Highly manual routines make room for human error
Test statistics are hard to get | Test logging is done in Excel
Serial number mismatch | Wrong machine tested
No test verification | No routines to verify the tests
Emulate long cables during test | Poor specifications on how to load logic levels
Test statistics are hard to get | Test logging is done in Excel
Test version control | Difficult to verify the latest test version

APPENDIX 4: PRIORITY PLAN

Task-based time plan

Priority: 1 = Must have, 2 = Preferred, 3 = Bonus

TEST-SOFTWARE tasks (estimated time in days | priority | progress):
Write manual for test SW | 4 | 1 | 30%
Research on which programming language is best suited for both Linux and Windows | 2 | 1 | 30%
Execution framework test code | 10 | 1
Network connection to log server | 5 | 3
Interface test code with dongle | 5 | 2
Write / find test scripts (interfacing with hardware) | 10 | 1
Getting to know DUT | 2 | 1
Round up passive test plugs/cables | 1 | 1
Implement test specifications (with concern taken to test SW) | 1 | 1
JTAG test software (Magnus, individual) | 15 | 1 | 10%
Testing / validation | 7 | 1
Notes to report | 5 | 1
TOTAL TIME SOFTWARE: 67 days

TEST-HARDWARE tasks (estimated time in days | priority | progress):
Write manual for test dongle | 4 | 2
Decide test cases for the ports chosen | 2 | 2
Build casing for dongle | 4 | 2
Design PCB | 6 | 2
Manufacture PCB (board bring-up and test) | 2 | 2
Soldering | 3 | 2
Write program for microcontroller | 15 | 2
Design cables for dongle (DB9 - ???) | 2 | 2
Manufacture cables for dongle (DB9 - ???) | 3 | 2
Power tester (Gustav, individual) | 15 | 1 | 5%
Testing / validation | 7 | 2
Notes to report | 5 | 2
TOTAL TIME HARDWARE: 68 days

TOTAL TIME: 135 days
AVAILABLE TIME: 140 days
TIME DIFF: 5 days

APPENDIX 5: SOLUTIONS MATRIX

Each row pairs a problem with a proposed solution and sub-solution, scored 0-3 against five evaluation criteria. (The five score-column headings were printed rotated in the original layout and could not be recovered from the extracted text.)

No. | Problem | Explanation | Solution | Sub solution | Scores
1 | Test EMC/EMI | hard to pinpoint EMC problems | better knowledge about EMC problems | guide documents on PCB layout | 1 0 0 0 0
2 | Test EMC/EMI | takes time | better knowledge about EMC problems | equipment to measure radiation | 0 0 0 0 0
3 | Distinguish hardware from software error | is it hardware or software that is the problem | verify hardware | boundary-scan test | 0 0 3 0 1
4 | Distinguish hardware from software error | is it hardware or software that is the problem | verify software | test on similar hardware | 3 2 2 2 2
5 | Solve intermittent errors | hard to find random errors | run multiple repetitive tests, burn-in | test automation, cyclic tests | 3 3 1 3 3
6 | Time to rig test | finding necessary hardware and software | centralised hardware | shelf with test name signs | 3 3 1 3 3
7 | Time to rig test | finding necessary hardware and software | centralised software | tests in centralised folder on network | 3 0 2 0 3
8 | Test execution time | time to connect test equipment to DUT | make a simple connection board | have the developers create this while developing the board being tested | 0 0 0 0 0
9 | Test execution time | time to run the test on a DUT | run relevant tests | don't test memory if the system boots | 0 0 0 0 0
10 | Test execution time | time to run the test on a DUT | shorter cycles on tests | smaller disk/media write cycle | 1 0 0 0 1
11 | Test execution time | time to run the test on a DUT | shorter cycles on tests | send less data on loopback ports | 1 0 0 0 1
12 | Test execution time | time to run the test on a DUT | automate the process | less manual input | 3 3 3 2 1
13 | Test power circuits | tools used to load power circuitry are complex | use easy-to-use dynamic load tester | get tester and document use | 3 3 0 3 1
14 | Test power circuits | manual method | automate the process | get automated load/power tester | 3 3 0 3 1
15 | Test power circuits | inflexible methods | use digital technology | stop using only power resistors | 2 2 0 3 1
16 | Test power circuits | heavy equipment | use light-weight tester | make a light-weight tester | 2 2 0 3 0
17 | Test power circuits | heavy equipment | use light-weight tester | buy a light-weight tester | 2 2 0 3 3
18 | Log power electrics | power test logging is manual | automate the process | get automated logging load/power tester | 3 2 0 3 3
19 | Time to boot DUT | time to boot test system | strip OS | remove unnecessary processes | 0 0 0 0 0
20 | Time to boot DUT | time to boot test system | boot multiple systems in parallel | create multiple boot media | 3 0 0 0 0
21 | Time to boot DUT | time to boot test system | boot small test OS | create stripped test OS | 0 0 0 0 0
22 | Time to boot DUT | time to physically mount bootable media in DUT | boot external media | boot USB | 3 0 0 0 0
23 | Time to boot DUT | time to physically mount bootable media in DUT | boot external media | PXE boot, Jump-start server | 0 0 0 0 0
24 | Time to boot DUT | time to physically mount bootable media in DUT | reduce the work to disassemble | deliver disassembled system to DR | 0 0 0 0 0
25 | Time to disassemble DUT | time to disassemble DUT | reduce the work to disassemble | thorough mechanical solution | 0 0 0 0 0
26 | Time for test creation | time to build a new test | re-use old tests | tests in centralised folder on network | 3 0 0 0 2
27 | Time for test creation | verify that the test is relevant | look at similar previous products | - | 1 0 0 0 1
28 | Time for test creation | decide what is PASS and what is FAIL | emulate errors | unplug loopback during test | 3 2 2 1 3
29 | Time for test creation | decide what is PASS and what is FAIL | emulate errors | disconnect hardware | 3 2 2 1 3
30 | Careless mistakes | highly manual routines make room for human error | automate the process | - | 3 3 3 3 3
31 | Careless mistakes | bored operator easily makes mistakes | different working tasks during the day | more than one person knows the task | 3 2 1 2 2
32 | Poor test coverage | difficult and time-consuming to test odd devices | simplify testing of odd devices | make sure System Integration can re-use the tests run by the developers | 3 3 1 3 1
33 | Poor test coverage | devices may require special SW from customer | use special SW | request SW from customer | 0 0 0 0 0
34 | Poor test coverage | devices may require special SW from customer | write own special SW | request special SW specifications | 0 0 0 0 0
35 | Poor test coverage | test requires special environment | create special environment | buy climate chamber, EMI chamber or other test equipment needed | 0 0 0 0 1
36 | Poor test coverage | test requires special environment | let external part perform test | make a deal with a test house | 2 0 0 1 2
37 | Logging errors | manual logging is not always correct | automate logging | make sure the test software logs everything automatically | 3 3 2 3 3
38 | High logging time | test logs are manual in some tests | automate logging | make sure the test software logs everything automatically | 3 3 2 3 3
39 | High logging time | some tests might not be necessary | discard tests that have no purpose | thorough examination of system specifications together with customer | 0 0 0 0 0
40 | Poor logging level | some tests do not have a low-level log | if critical, change test | add to guidelines that low-level logging should always be saved | 3 3 3 3 3
41 | No BIOS settings log | BIOS settings are not always logged | log BIOS settings | get program to log BIOS settings | 3 0 0 0 0
42 | Difficult to run old tests | test software not available | save test software | save test bundle in Arena | 3 0 0 0 2
43 | Difficult to run old tests | test hardware not available | save test hardware | make a box or similar container, label it and store it | 0 0 0 0 0
44 | Poor DFT | complex methods to test DUT | make sure developers design for test | give developers a course in DFT; operations should be involved in development | 2 2 3 2 3
45 | Test overheating | temperature tests | load circuits that produce heat and measure temperature | automated temperature logging | 3 3 0 3 2
46 | Visual inspection logging error | highly manual routines make room for human error | show a picture of how it should look | make test program show picture | 3 0 0 0 2
47 | Test statistics are hard to get | test logging is done in Excel | save log to database | make test program log to database | 3 0 0 0 3
48 | Serial number mismatch | wrong machine tested | use barcode reader to enter serial number | make test program read the barcode reader | 3 0 0 0 3
49 | Test statistics are hard to get | test logging is done in Excel | save log to database | make test program log to database | 3 0 0 0 3
50 | No test verification | no routines to verify the tests | create routines to verify tests | simulate errors to test the test | 3 1 2 1 2
51 | Test version control | difficult to verify the latest test version | create routines to verify tests | simulate errors to test the test | 3 1 2 1 2
52 | Emulate long cables during test | poor specifications on how to load logic levels | put resistors in loopback plugs | measure voltages in signals | 3 3 0 0 3
53 | Test statistics are hard to get | test logging is done in Excel | save log to database | make test program log to database | 3 0 0 0 3

APPENDIX 6: RMA STATISTICS

RMA Analysis

Model: Transas RS6 (dates 10-09-07 to 11-09-07)
Error description | pcs w. error
power management | 68
HDD | 18
ComExpress | 9
MXM | 3
PS dead | 3
RAM | 1
CPU heat | 1
corrupt file | 1
eco | 1
MAC address | 1
DVD ROM | 1
TOTAL | 107

Model: DG3 (dates 10-09-07 to 11-09-07)
c197 | 13
com express | 13
"48 Volt" | 5
c223 | 4
t15 | 4
u22 | 2
cmos battery | 2
t7 | 2
t8 | 2
u36 | 2
t14 | 2
ETH | 2
fan | 1
HDD | 1
u21 | 1
u36 | 1
c209 | 1
power cables | 1
TOTAL | 59

Model: PowerGrade 3D (dates 10-09-07 to 11-09-07)
touch calibration, dark display | a lot
inverter | a lot
cold start patch | a lot
display and touch | 45
buttons | 7
magnet missing | 6
assembly error | 5
etx board | 2
A-cover | 2
hole on rubber list | 2
bluetooth | 1
ir foil | 1
pad light cable | 1
HARDWARE LOCK ERROR | 1
c120 | 1
gore valve | 1
USB / SD conn | 1
boot error | 1
c223 | 1
TOTAL | 78

Model: GeoROG (dates 09-09-07 to 11-09-07)
USB port bad mounting | 11
scratches on logo | 10
inverter error/burned | 9
pixel error | 6
short circuit HW-lock | 6
loose screws | 5
ETX module faulty | 5
bad LAN due to wrong cables | 5
loose cables | 4
power fault | 3
soldering error, patch 13.2 V | 2
FPGA code corrupt | 2
bad soldering | 2
RAM fault | 2
R1 burned | 1
D2 burned | 1
replaced carrier board | 1
bad EPDM foam tape | 1
USB glitch | 1
interrupt error CF card | 1
bad rs232 controller | 1
OSC1 touch controller bad | 1
TOTAL | 80

APPENDIX 7: RISK ASSESSMENT

Columns: Main risk | Cause | Probability | Severity | Total risk | Solution (for risks ≥ 27)

Dynamic solution seems impossible
- Using a hopeless method | 3 | 9 | 27 | Make sure other people have a look at the solution from time to time.
- Too advanced | 3 | 3 | 9

We get thrown out on the street
- Bad behaviour | 1 | 9 | 9

The thesis never gets finished
- Too heavy workload | 9 | 3 | 27 | Make sure delimitations are narrow enough.

The tests we develop are not important
- Unclear specification | 3 | 9 | 27 | Verify specification with end user.

Data Respons can't use the product
- It does not do the right thing | 3 | 9 | 27 | Verify specification with end user and supervisor.

The report is a mess
- Misunderstood how to write a report | 3 | 9 | 27 | Periodically have Sagar look at the report.

All files get deleted / scrambled
- Dropbox makes an error | 1 | 3 | 3
- Data Respons servers crash | 3 | 3 | 9

The backup is unusable
- There is none | 3 | 9 | 27 | Set up an automatic daily backup.
- It is too old | 3 | 9 | 27 | Set up an automatic daily backup.
- It is not readable / usable | 3 | 9 | 27 | Only use Windows and Open Office.

Data Respons can't proceed with the master thesis
- Not enough resources | 1 | 9 | 9
- Bought by other company | 1 | 3 | 3
- Bankrupt | 1 | 9 | 9

Prototype disappears
- Stolen | 1 | 9 | 9
- Broken (zapped by ourselves) | 9 | 9 | 81 | Build it from components that are replaceable and not too expensive. Have a small replacement plan for all parts.

Solution too complex
- Limitations not narrow enough | 3 | 9 | 27 | Use V-model and check progress against planned work every day.

Documentation SW can't be integrated
- Non-compatible data types | 3 | 3 | 9

Documentation SW doesn't work
- Bugs | 9 | 3 | 27 | Fix bugs.
- The code is a total mess | 3 | 9 | 27 | Check the program early in the pre-study; if it doesn't work, find an alternative.

Solution too expensive
- Too high demands | 3 | 3 | 9
- Bad choice of components | 3 | 9 | 27 | Be careful when planning purchases.
- Wrong task solved | 3 | 9 | 27 | Verify specification with end user.

We run out of time
- Bad planning | 9 | 9 | 81 | Plan carefully; have supervisors look at the planning.
- Bad knowledge about lead times | 9 | 9 | 81 | Ask supervisor about lead times. Always count on breaking something important at the worst time.

Essential components are not available
- Too exotic components | 3 | 9 | 27 | Consider how much fancy stuff we really need.

Requirements change
- Sloppy pre-study | 3 | 9 | 27 | Have supervisors approve the pre-study before start of construction.

We are not competent enough
- Solution too complex | 3 | 9 | 27 | Have supervisors approve the intended solution before start of construction.

Essential tools are not available
- Too exotic tools needed | 3 | 3 | 9

Essential tools are too expensive
- Too exotic tools needed | 3 | 3 | 9

Stefan or Jeanette leaves Data Respons
- Fired | 1 | 9 | 9
- Tired | 1 | 9 | 9
- Hired | 3 | 3 | 9

APPENDIX 8: XJTAG PRODUCT EVALUATION

XJTAG

Product test and evaluation

Magnus Dormvik 1 XJTAG EVALUATION

Table of content

1 XJTAG EVALUATION
  1.1 Purpose
  1.2 Product information
  1.3 DUT
  1.4 Overview
    1.4.1 The evaluation kit
    1.4.2 Hardware
  1.5 Getting started
    1.5.1 Software installation
    1.5.2 Hardware configuration
  1.6 XJAnalyser evaluation
    1.6.1 Setup
    1.6.2 Building the JTAG chain
  1.7 XJDeveloper evaluation
    1.7.1 Initialising the JTAG chain
    1.7.2 The interface
    1.7.3 Specifying power nets
    1.7.4 Specifying components
2 RESULTS
3 DISCUSSION AND CONCLUSIONS
  3.1 Discussion
  3.2 Conclusions
4 Appendix A: M25P64 Serial Flash Memory

1.1 Purpose
The purpose of this evaluation is to test how difficult it is to implement a boundary-scan test using the XJTAG suite, how well the functionality of the different boundary-scan standards is implemented, what features are available and how they are used, and how the feature set compares with the many testing features defined in the available boundary-scan standards.

1.2 Product information

Company: XJTAG
Price: Full version $20,000; Developer and XJLink2 only, $10,000
Version: XJLink2 hardware, XJTAG 2.5 software revision
Hardware compatibility: works only with XJTAG-specific hardware testers such as XJLink2.

1.3 DUT

The device under test is a network controller board containing one Xilinx Spartan-6 FPGA and eight network controllers, all of which are connected to the boundary-scan chain. The board layout is not public information, and only the components relevant to the test are presented in this report.

1.4 Overview

XJTAG provides a complete setup for testing PCBs. It includes tools to run manual tests (XJAnalyser), create custom automated tests (XJDeveloper) and create test packages for specific systems (XJRunner). To help the user get started quickly, it also includes well-made documentation and tutorials. See Illustration 1: XJTAG product package.

Illustration 1: XJTAG product package 1.4.1 The evaluation kit

The product was sent as an evaluation kit to Data Respons AB. The kit included a USB-to-JTAG adaptor (XJLink2), a CD with software and drivers, and a demo board used in the tutorials to get familiar with the software.

1.4.2 Hardware

XJLink2 is a USB-to-JTAG adaptor used to connect your PC to the DUT. It is powered from the USB connector and has a configurable 20-pin connector in the front to interface the JTAG chain [Illustration 2: XJLink 20-pin connector initial setup]. It can run up to four JTAG chains, and it also includes the extra features of providing 5 V, 100 mA power to the DUT and measuring voltage levels on the non-JTAG pins.

Illustration 2: XJLink 20pin connector initial setup

All pins on the 20-pin connector are configurable except pins 10 and 20, which are hard-mapped to GND. The other ports provide logic input, VCC or GND with a maximum current of 100 mA. Even-numbered ports from 2 to 8 and 12 to 18 provide slow-mode testing and programming up to 50 MHz, and odd-numbered ports from 1 to 9 and 11 to 19 provide fast-mode testing and programming up to 160 MHz. The following states can be set:

TDI: Test data in on TAP

TDO: Test data out on TAP

TCK: Test clock

TMS: Test mode select

PIO: General I/O pin, frequency or voltage measurement

LOW: GND

HIGH: (3.3 V, 2.5 V, 1.8 V, 1.5 V, 1.2 V)

1.5 Software installation

The software installs without any problems and the demo license is installed in the XJLink Manager. The license key is tied to the XJLink2, so it has to be plugged in during license installation and while the product is used. The demo license was provided by XJTAG's sales representative in Sweden, Joakim Lang.

1.6 Hardware configuration

The configurable 20-pin TAP connector of the XJLink2 adaptor has to be configured manually to connect to the 14-pin 2 mm TAP connector on the PS700 FPGA board. The connector on the board is designed to be used with the Xilinx TAP controller [Illustration 3: 14-pin PS-700 FPGA board TAP connector]. Since the Xilinx TAP controller uses a flat cable, the pin-out also represents the connector on the PS-700 FPGA board.

Illustration 3: 14-pin PS-700 FPGA board TAP connector

A converter cable was created. The 14-pin 2.00 mm connector was mapped to the XJLink2 20-pin 2.50 mm connector in such a way that each signal was paired with ground on the flat ribbon cable. Adjustments were made to accommodate pin 10 being hard-wired to GND. The XJAnalyser cable configuration tool was used to map the converter cable to the configurable 20-pin connector of the XJLink2 adaptor [Illustration 4: XJTAG TAP GUI configurator].

Illustration 4: XJTAG TAP GUI configurator

1.7 XJAnalyser evaluation

XJAnalyser is a manual test program used to find defects on the PCB. The program can import BSDL files for the ICs and show the physical layout of the devices. By selecting individual pins, changing their states and observing the result, the functionality of the PCB can be verified.

1.7.1 Setup

To connect to the PS-700 FPGA board, the configuration from the previous chapter Hardware configuration was used. After several tests and verifications of the created cable and the pin configuration, the JTAG chain was identified. The skew had to be set to low on TDI, TCK and TMS in order to get a proper signal on the JTAG chain. This was discussed with XJTAG's Joakim Lang, who had never changed these settings before. There are some inconsistencies regarding termination of the JTAG chain when comparing the Xilinx JTAG standard termination and the XJTAG DFT guidelines with the actual board schematics. The developers at Data Respons were informed about this issue. The TAP configuration was saved as ps700analyser_test.xjpm and was later also used for the XJDeveloper test program.

1.7.2 Building the JTAG chain.

At first the nine devices on the JTAG chain were identified as unknown; only the device IDs could be displayed. To run any tests on the JTAG/boundary-scan devices on the chain, the BSDL files need to be added by pairing the device IDs and manufacturer IDs with the appropriate BSDL files provided by the IC manufacturers. When the BSDL files were added, the grey box of each undefined device changed to a graphical representation of the device/IC [Illustration 5: Initial setup with analyser on PS700]. The pins of the devices are colour-mapped. Black pins are connected to VCC, GND or the JTAG chain and cannot be changed; red pins represent logic one and blue logic zero. Yellow pins are fluctuating, which could be verified by checking the FPGA clock pin, which indeed was yellow. By right-clicking a red or blue pin, the state of the pin can be changed to high, low or pull-up. If the pin is connected to another JTAG device, the change also appears there, which can be seen as a similar change in colour of that device's pin. This is similar to the TopJTAG program evaluation, where the testing technique is described in more detail.

Illustration 5: Initial setup with analyser on PS700

1.8 XJDeveloper evaluation

XJDeveloper is a program that can automate the testing of PCBs with one or more JTAG devices mounted. The program imports net-lists and BOM lists. The program also uses test scripts to test devices which are not JTAG-enabled, by accessing them through a JTAG-enabled device. XJDeveloper also includes a test-coverage analyser which can be used while designing the PCB to evaluate the testability of the card. The difference between the less expensive solutions and the XJTAG suite is mainly XJDeveloper. The ability to import net-lists is the key feature which opens up the possibility to create automated test patterns. This is done by calculating how a state change on a specific pin affects the other pins on the PCB according to the net-list, and comparing this with the result of the actual state change made through the BS test.
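The principle can be illustrated with a small sketch. This is our own illustrative model, not XJTAG code; the Net type and expectedReadback helper are hypothetical names. The idea is simply that pins joined by a net in the imported net-list must all read back the level driven onto any one of them.

```cpp
#include <map>
#include <set>
#include <string>

// Hypothetical model of one net from the imported net-list:
// the set of pins that are galvanically connected.
struct Net {
    std::set<std::string> pins;  // e.g. {"U400.3", "IC6.12"}
};

// Predict the read-back level of every other pin on the net when
// 'driver' is set to 'level' through the boundary-scan chain.
// A mismatch between this prediction and the actual BS read-back
// indicates a fault on the net.
std::map<std::string, bool> expectedReadback(const Net& net,
                                             const std::string& driver,
                                             bool level) {
    std::map<std::string, bool> expected;
    for (const std::string& pin : net.pins)
        if (pin != driver)
            expected[pin] = level;  // connected pins follow the driver
    return expected;
}
```

Real test generation must of course also respect power nets and unspecified components, as described in the following sections.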

1.8.1 Initialising the JTAG chain

The pin map from XJAnalyser, ps700analyser_test.xjpm, was used to connect to the JTAG chain. The approach is not as simple as with XJAnalyser. XJDeveloper starts from the net-list and tries to interpret the full JTAG chain on the PCB before the chain can be tried, whereas XJAnalyser simply runs the chain and catches whatever is on it. In order to complete the JTAG chain connection, all devices from TDI to TDO need to be specified. In our case the chain consists of nine JTAG devices, two voltage converters (from 3.3 V to 2.5 V and back again) and one connector (the TAP). The converters can be specified as jumpers, since they only forward the level-shifted signal of the JTAG chain. The connector needs to be specified as the TAP, with the external TDO and TDI specified as the external connection to the BS chain. To complete the chain, the BSDL files need to be added to the JTAG/BS devices. This creates the link between the TDI and TDO on each device. When this is done the JTAG chain can be tested. See Illustration 6: XJDeveloper JTAG chain complete.

Illustration 6: XJDeveloper JTAG chain complete

1.8.2 The interface

The XJTAG XJDeveloper interface is easily understandable and quite intuitive to use. It is built much like a classic web page, where the menu on the left changes the content of the main window. At the bottom a progress bar is displayed together with a short explanation of the next step needed to complete the test setup. See Illustration 7: JTAG view.

Illustration 7: JTAG view

1.8.2.1 The working progress

When setting up the test, the work starts at the top of the menu with importing net-lists and BOM lists. The next step is to configure the physical connection to the board to map the TMS, TDO, TDI and TCK of the XJLink2 programmer; see Hardware configuration. When this is done the power nets are specified. After that, the JTAG chain is initiated and devices are added.

1.8.3 Specifying power nets

In order to prevent any short circuits while testing the board, all power nets need to be specified, so that these nets are not tested by trying to pull them low or high. The ability to import BOM lists enables semi-automated classification of power nets. All nets with names containing VCC or GND are automatically added to the categories Suggested Ground Nets and Suggested Power Nets and can be dragged to the Ground or Power window to the right. See Illustration 8: Power Ground view.

Illustration 8: Power Ground view

1.8.4 Specifying components

The next step is to specify the other components on the PCB. The ability to import BOM lists enables semi-automated classification of passive devices, such as series resistors and pull-up resistors, by putting the devices in suggested device categories; see Illustration 9: XJ Developer interface Categorice Devices. This helps define the peripheral devices on the PCB. In order to run any test on a specific net, all devices connected to that net need to be specified. This is a safety precaution to prevent any harm to the PCB during tests. As new devices are specified, new nets can be tested. The progress of improving the testability can be viewed in the test-coverage area of the program.

Illustration 9: XJ Developer interface Categorice Devices

1.8.4.1 Specifying passive devices

Specifying pull-up and series resistors gives a test coverage of 23.9%. The test coverage is based on the total number of nets on the PCB, including the power nets. The power nets cannot be tested specifically with boundary-scan logic, but since the JTAG chain itself needs the 2.5 V and 3.3 V supplies for the buffers to work, they can be considered tested. If the power nets are excluded from the statistics, the test coverage increases to 50.6%; see Illustration 10: Test coverage Summary. The interference-suppression capacitors mounted close to the FPGA I/O pins also need to be specified.

How the nets are tested depends on what is possible to test, which is a result of how they are connected to power nets and ground. The possible tests are stuck-at-high, stuck-at-low, short circuit, open circuit and functional test. A stuck-at-high error indicates that a net that should be changeable is stuck at a high level; stuck-at-low indicates the opposite. A short-circuit error indicates that the net has a connection to some other net to which it should not be connected; an open-circuit error indicates that the net lacks a connection to some other net to which it should be connected. The functional test indicates that an error occurred when changing the state of a net and the change did not take place on another JTAG-enabled device connected to the net. See Illustration 10: Test coverage, Tested pins coverage.

Illustration 10: Test coverage
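A minimal sketch of how two drive-and-read passes separate the stuck-at fault classes. The enum and function names are our own, not part of any XJTAG API; short- and open-circuit detection additionally requires comparing read-backs across nets from the net-list.

```cpp
// Fault classes distinguishable from a single net's read-back.
enum class NetFault { None, StuckAtHigh, StuckAtLow, Inverted };

// readHigh/readLow: the level read back on the net after driving it
// high and low respectively through the boundary-scan cells.
NetFault classifyNet(bool readHigh, bool readLow) {
    if (readHigh && readLow)   return NetFault::StuckAtHigh; // never goes low
    if (!readHigh && !readLow) return NetFault::StuckAtLow;  // never goes high
    if (!readHigh && readLow)  return NetFault::Inverted;    // unexpected wiring fault
    return NetFault::None;                                   // net follows the driver
}
```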

As mentioned, XJTAG does not test nets that have any connected component unspecified. This also includes connectors. Specifying a connector as a connector enables XJTAG to set any level on the IC pin connected to the connector, thus improving test coverage. Most connectors can be seen as an endpoint if the connector is not connected to any device. If an external device is connected, the net-list of that device needs to be imported for proper testing. Leaving it unspecified excludes the external device, leaving it and the nets connected to it untested. Some of the connectors of the PS700 can be seen in Table 1: Selected connectors on PS700.

Netlist Device ID component Description Test script

J600A J8064D628ANL Ethernet connector Spec. as connector

J600B J8064D628ANL Ethernet connector Spec. as connector

J1000 PCIEXPRESS PCI express connector Spec. as connector

Table 1: Selected connectors on PS700

Some connectors contain internal components, as does the RJ-45 connector. As can be seen in Illustration 11: J8064D628ANL blueprint, it contains inductors and LEDs. To test this connector some configuration is needed.

Illustration 11: J8064D628ANL blueprint

The BS test runs at low speed compared to standard 100 Mbit RJ45 traffic, so the inductors in the connector can be seen as short circuits. The connector also has two LEDs integrated in it. They too can be tested, by adding the proper connections between the pins of the connector to make XJTAG aware of their testability. To do this, XJTAG provides a tool to configure custom passive devices. By adding the internal connections according to the blueprint (Illustration 11), the LEDs and the inductors can now be tested, since XJDeveloper is made aware of the connections between the pins of the connector.

Illustration 12: Passive device tool

1.8.4.2 Specifying test devices

To further improve the test coverage, logical components such as logic gates, EEPROM and SDRAM can be tested. XJTAG provides scripts created in its own scripting language (.xje format) to test logical components by accessing them via a boundary-scan-enabled device. Several scripts are provided for registered users on the XJTAG website. According to the PCB net schematic there are some components that can be accessed and tested through the FPGA; see Table 2: Testable devices. Depending on how they are connected and what test scripts are provided, a device can be tested.

Netlist Device ID component Description Test script

U400 M25P64VMF6 Serial Flash 64 Mbit Modified M25P20

U402 M24C32-WMN6 EEPROM -

Table 2: Testable devices

The serial flash is connected to the FPGA via SPI. To test the device a suitable test script is needed. It is possible to write the script from scratch, but an M25P20 script was found on the XJTAG website. In order to make it work, the script needs some modification. The script uses an old method of identifying the component by addressing the memory's RES (Read Electronic Signature) bytes. This is done by writing the RES command to the device and then comparing the response with an expected identifier. By re-mapping the pins in the test script and changing the expected RES value from 0x11 (M25P20) to 0x16 (M25P64) according to the M25P64 documentation [www.micron.com], the M25P20 test script works. The component can now be verified as correct and working.
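The identification sequence the modified script performs can be sketched as follows. The xfer callback is a hypothetical stand-in for the boundary-scan-driven SPI byte transfer (not an XJTAG API); the opcode sequence, 0xAB followed by three dummy bytes, follows the M25P-family datasheets, and the signature values come from the evaluation above.

```cpp
#include <cstdint>
#include <functional>

// Sketch of the RES (Read Electronic Signature) check.
// xfer(byte) shifts one byte out on MOSI and returns the byte
// simultaneously shifted in on MISO.
bool verifySignature(const std::function<uint8_t(uint8_t)>& xfer,
                     uint8_t expected) {
    xfer(0xAB);               // RES opcode for the M25P family
    xfer(0x00);               // three dummy bytes
    xfer(0x00);
    xfer(0x00);
    uint8_t id = xfer(0x00);  // device shifts out its signature
    return id == expected;    // 0x16 for M25P64, 0x11 for M25P20
}
```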

The M24C32-WMN6 I2C EEPROM also has a suitable script provided on the XJTAG website, but for some reason it would not work properly. Since the main goal of this exercise was to evaluate the functionality of the program, not to optimize the test coverage, no more effort was made to make it work.

1.8.4.3 Interactive tests

Another method of testing devices is to create interactive tests for visual inspection or human interaction. The method uses the same type of test scripts as for the test devices, but commands for external input are added. The tests can include pushing a button or visually verifying an illuminated LED on the PCB. On the PS700 FPGA board there are no buttons, but there are 20 LEDs that can be tested. XJTAG also provides test scripts for this type of interactive test. By specifying the LEDs as test devices and adding the LED.xje file to the devices, they become part of the interactive test; see Illustration 13: Run mode Interactive tests LED. When running the test, one LED at a time is illuminated; pressing space on the keyboard confirms the functionality and the test moves on to the next LED.

Illustration 13: Run mode Interactive tests LED

2 RESULTS

The XJTAG product suite provides an easy way to create automated boundary-scan tests that can be used in development, production and after-market testing of PCBs. When most of the passive and test devices were specified and the power nets were excluded from the statistics, the total test coverage was 73.5%. This can be considered fairly high test coverage, since the PS700 board was not designed for boundary-scan testing.

Illustration 14: DFT view final tests coverage

Running tests is very fast. With the interactive LED tests disabled, a boundary-scan test covering 1164 nets completes in less than two seconds.

Illustration 15: Run view time to test

3 DISCUSSION AND CONCLUSIONS

3.1 Discussion

The ability to import net-lists and BOM lists makes it possible to automate the testing of embedded systems. A test can be created in XJDeveloper before the PCB is physically produced, so the test-coverage analyser can be used in the design phase to design the PCB for boundary-scan testing. The ability to test power nets is poor on the PS700 FPGA board. This could have been solved in the PCB design process if the different voltage nets had been connected to the TAP. The XJLink2's built-in AD converter could then have been used in the test to measure the levels. Also, no loop-back plugs were created during the test; this would have increased the test coverage further. The product is priced in such a way that the PCB suppliers of a company buying the suite get a free XJTAG test suite (hardware and software) to do factory testing. This is very convenient, since the PCB can be tested before it is mounted in a case where the TAP is not accessible. The shipping and handling costs of defective PCBs are also reduced.

3.2 Conclusions

Using the XJTAG product suite is fairly fast and simple. Creating a system test takes one or two days, and becomes a lot faster when the person involved is familiar with the PCB and has experience with XJDeveloper. Only the device test standard IEEE 1149.1 is used; IEEE 1581 is not used, since there are no devices on the board that support it. IEEE 1581 testing can easily be confused with the SPI flash tests, which use XJTAG's test scripts. It appears a lot of testing can be done with only IEEE 1149.1-compatible ICs. The main issue is how well the test program can produce intelligent tests out of the net-list, BOM lists and available devices. How easily this is done is also a key feature. The support for non-JTAG-compatible devices (test scripts) was good, but there are many devices on the market and eventually test scripts have to be created or modified to cover them all. However, the approach with test scripts is good, since it seems possible to test almost any device by creating a smart custom test script. In this case, and probably in many other cases, the result of the tests is not as dependent on which standards above IEEE 1149.1 the test suite supports as on how the IEEE 1149.1 standard is used.

APPENDIX 9: TOPJTAG PRODUCT EVALUATION

TopJTAG

Product test and evaluation

Magnus Dormvik

Table of contents
1 TopJTAG evaluation
1.1 Purpose
1.2 Product information
2 DUT
2.1 Product functionality
2.2 Test evaluation software
2.2.1 Hardware setup
2.2.2 Software setup
2.2.3 Running tests
3 Result
4 Conclusions

1 TopJTAG evaluation

1.1 Purpose

The purpose of this evaluation is to test how difficult it is to implement a boundary-scan test using TopJTAG, how well the functionality of the different boundary-scan standards is implemented, what features are available, how they are used, and how they compare to the many testing features offered by the available boundary-scan standards.

1.2 Product information

Company: TopJTAG
Price: 100 USD
Version: TopJTAG Probe 1.7.4
System requirements: Any (32- or 64-bit) Windows XP, 2003, Vista or Windows 7.
Hardware requirements: The TopJTAG software supports eleven different hardware adapters, such as the Altera USB-Blaster used in this test.

2 DUT

The device under test is a network controller board containing one Xilinx Spartan-6 FPGA and eight network controllers, all of which are connected to the boundary-scan chain. The board layout is not official information and only the components relevant to the test are presented in this report.

2.1 Product functionality

TopJTAG Probe is boundary-scan software that implements the functionality of the IEEE 1149.1 standard. The product consists of a single software binary that can be used to perform manual boundary-scan tests. This, in short, means setting and reading states on pins. The boundary-scan-compatible devices are presented in a graphical view with all pins colour-coded according to pin state. TopJTAG Probe also enables the user to import pin names of the JTAG-compatible ICs, which can be seen as a light version of BOM-list import, since names and values of the pins are often specified in these files.

2.2 Test evaluation software

TopJTAG was downloaded from www.topjtag.com and installed with the 20-day trial period included with the binary.

2.2.1 Hardware setup

Data Respons provided an Altera USB-Blaster used in earlier projects to program FPGAs and other devices. A converter cable was created to link the 10-pin Altera 2.25 mm connector to the 14-pin PS700 2.00 mm TAP connector. This was done according to the USB-Blaster specification [ALTE1]. The pin mapping is presented in Table 1: USB-Blaster to PS700 TAP pin map.

Type          Altera USB-Blaster   PS700 TAP
TCK           1                    6
GND           2                    3
TDO           3                    8
VCC (target)  4                    2
TMS           5                    4
GND           6                    5
nc            7                    -
GND           8                    7
TDI           9                    10
GND           10                   9
nc            -                    1,11,12,13,14

Table 1: USB-Blaster to PS700 TAP pin map

The result can be seen in the upper right corner of Illustration 1: Altera USB Blaster to PS700 with TAP converter cable. Compared to the XJTAG XJLink2, the USB-Blaster does not have a configurable TAP connector. This missing feature does not seem to be a problem, since the cable in many cases needs to be soldered manually to fit the target DUT's TAP.

Illustration 1: Altera USB Blaster to PS700 with TAP converter cable

2.2.2 Software setup

The TopJTAG software was very easy and quick to install, with only the standard questions asked. The cable was connected to the PS700 FPGA board and the TopJTAG program was started. The program was configured to connect through the Altera USB-Blaster and a JTAG chain scan was initiated. The nine boundary-scan-compatible devices connected to the chain were identified, and a request to provide BSDL files for the devices was presented. See Illustration 2: TopJTAG, add BSDL files.

Illustration 2: TopJTAG, add BSDL files

When the BSDL files were added, E3018.bsd for the Ethernet controllers and xc6slxt_fgg484.bsd for the FPGA, the devices are presented graphically with the pin layout of the physical device. The pins are coloured according to the state they are in. See Illustration 3: TopJTAG, device view with BSDL files added.

Illustration 3: TopJTAG, device view with BSDL files added

2.2.3 Running tests

When the devices are configured, they can be used in one of three pin modes or in bypass; see Illustration 4: TopJTAG pin mode select. The modes are:
• INTEST: internal test mode, BITS (pin permission)
• EXTEST: external test (pin permission)
• SAMPLE: listening mode (non-invasive)
• BYPASS: forwards the boundary-scan chain traffic

When boundary-scan is used without a PCB net-list, any pin accessible to the boundary-scan logic can be set to any state. This can be devastating if, for example, a pin connected to VCC is set as output low, which would short-circuit the IC. Every change made in the pin permission mode has to be carefully thought through by manually reading the PCB net-list and evaluating which states the pins can be set to. A safe way to test grounded nets is to set the pin connected to the net to resistive high (ZHIGH), trying to raise the voltage level to logic 1 using the built-in pull-up resistor of the IC. If the operation succeeds, the net can most certainly be raised using the normal logic high (HIGH). The colours of the pins are blue for low (GND), red for high (VCC) and black for non-accessible pins such as power and boundary-scan JTAG pins.
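The cautious probing order can be expressed as a small helper. This is our own sketch, not TopJTAG functionality; setPin and readPin are hypothetical callbacks standing in for the manual GUI actions.

```cpp
#include <functional>

// Drive strengths used in the safe probing order described above.
enum class Drive { ZHigh, High };

// Try the weak pull-up (ZHIGH) first; only drive the net actively
// high if the weak pull succeeded. A weak pull-up cannot fight a
// hard ground connection, so a grounded net is detected harmlessly.
bool safeRaise(const std::function<void(Drive)>& setPin,
               const std::function<bool()>& readPin) {
    setPin(Drive::ZHigh);
    if (!readPin())
        return false;      // net held low: likely grounded, stop here
    setPin(Drive::High);   // weak pull worked, so active drive is safe
    return readPin();
}
```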

Illustration 4: TopJTAG pin mode select

By setting the IC in SAMPLE mode, the pins are polled continuously. In this mode the clock pin can be identified by its fluctuating blue and red colour. Observing that the high and low states each occur about 50% of the time confirms the clock's functionality. This can be done by applying a state log on the pin. Several pins can be selected for state logging. See Illustration 5: TopJTAG SAMPLE view pin state log view on FPGA clock.

Illustration 5: TopJTAG SAMPLE view pin state log view on FPGA clock
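The 50% observation can be expressed as a simple check over a logged sample series. This is a sketch with our own naming, not a TopJTAG feature; it assumes the samples are uncorrelated with the clock edge.

```cpp
#include <vector>

// From a log of sampled pin states, a free-running clock should
// read high roughly half the time (within the given tolerance).
bool looksLikeClock(const std::vector<bool>& samples,
                    double tolerance = 0.1) {
    if (samples.empty()) return false;
    int highs = 0;
    for (bool s : samples)
        if (s) ++highs;
    double ratio = static_cast<double>(highs) / samples.size();
    return ratio > 0.5 - tolerance && ratio < 0.5 + tolerance;
}
```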

3 Result

TopJTAG is a simple and intuitive application that provides a manual method of testing the connections on a PCB. By importing BSDL files, the program presents the boundary-scan devices graphically, similar to the physical design. The program uses IEEE 1149.1 but does not provide any way of running ISP programming, which is partly supported by the standard. This feature is provided in the "TopJTAG Flash Programmer" [TOPJ1], which is distributed separately. The program does what is expected in a simple and understandable way.

4 Conclusions

The ability that basic boundary-scan provides, to reach and influence pins on ICs that normally would not be reachable, is supported in this test suite. There is no complete alternative procedure for doing the same tests. The closest alternative is to implement test code on the microcontroller or FPGA that changes the states of the pins. Such actions are most likely not worth the time, and are unlikely to be permitted in the case of a fault analysis. Working with error identification on a faulty PCB becomes a lot faster, since the logical testing can be done entirely in a GUI by selecting pins on the BS devices and setting a desired state; no oscilloscope or probe positioning is required.

References

ALTE1: Altera Corporation, 2009, http://www.altera.com/literature/ug/ug_usb_blstr.pdf
TOPJ1: TopJTAG, TopJTAG Flash Programmer, 2011, http://www.topjtag.com/downloads/

APPENDIX 10: DESIGN EVALUATION MEETING

KTH, Master Thesis, Gustav Leesik, Magnus Dormvik, 29 Sep 2011

Meeting minutes

Attending: Gustav Leesik, Magnus Dormvik, Stefan Ohlson, Johan Henriksson, Mikko Salomäki. Absent: Jeanette Fridberg
Date and time: 29 Sept 2011, 10:00 – 10:30

Agenda
1. Present system design
2. Evaluate system design
3. Other

Notes and conclusions

Development notes: The test-execution framework will probably also run on a PC, send commands over serial, parse the answer over serial, and log to the PC. Proposition on a parse function DIFF to easily compare outputs. Test scripts may be used for automated installations as well. Example: echo "install" mac-address from the MAC-list repository to the target DUT and verify the operation. The test-execution framework will probably not be used to make devices work by trial and error. An alternative for logging this is to extend logging in the terminal emulator. When the device is working, the test-execution framework can be used to verify this with the same command used to verify the function during the trial-and-error period.

Operations notes: Continue to assume that the test will be run from USB in the operations environment. A USB-bootable test operating system would be nice.

General: The FTP option was presented and rejected. We are on track and can continue as planned.

APPENDIX 11: C++ CODE

The C++ code is provided as main.cpp, main.h and classes.cpp in the Appendix folder provided with the report. If no such folder is provided, please contact the authors.

APPENDIX 12: USER MANUAL

Done by Development

1. Use a PC to copy the Test-Program (TEX) from SVN to the root of a portable media (e.g. USB flash drive)

SVN: /SVN/Test-repository/


2. Run the test program from the flash drive on a PC and choose menu option "Create new product test" to create the product folder tree and config files on the flash drive. The program will prompt for product info.

|======|
| Test Execution Framework V0.1 |
| |
| --Last run configuration-- |
| User: NONE |
| Directory: NONE |
| The folloing options are avaiable |
| |
| 1. run selected product test |
| 2. run single device-test |
| 3. change operator |
| 4. change product test |
| 5. modify product test |
| 6. create new product test |
| 7. view current product test config |
| 8. help |
| 0. quit |

select number :

Spaces are not allowed anywhere except in product description and in the regular expression

Resulting file structure on the USB drive:

/ (USB drive root): TEX.exe, LastRun.conf
/PS700_demo: DUT.conf, main.log, /tests, /logs

Program prompt:
Enter name for product test folder: PS700_demo
Manually add device test folders to the folder PS700_demo/tests/
press any key to continue

3. Copy all test files needed from SVN (/SVN/"Product name"/Test/ or /SVN/Test-repository/) and put them in /"product folder"/tests/ on the USB drive.

4. TEX will add the tests to the file DUT.conf

Enter name for product test folder: PS700_demo
Manually add device test folders to the folder PS700_demo/tests/
press any key to continue

Test folders found: Directory USB date picture ping serialnumber

Scan complete
1: Continue
2: Re-list

Add/remove folders in the OS's file handler and re-list to add/remove tests.

5. Enter info about the DUT

Scan complete
1: Continue
2: Re-list
1

Enter PRODUCT_NAME:PS700
Enter PRODUCT_REV:1.2
Enter TEST_VER:0.2
Enter DESCRIPTION:demo test

6. DUT.conf for the new product test is now created, and is printed on the screen

---::: DATE and SERIALNUMBER must be the first two tests to run!! :::---

File structure on the USB drive:

/ (USB root): TEX.exe, LastRun.conf
/PS700_demo: DUT.conf, main.log, /tests, /logs

The printed DUT.conf:

PRODUCT_NAME:PS700
PRODUCT_REV:1.2
TEST_VER:0.2
DESCRIPTION:demo
TESTDIR:date
TESTDIR:serialnumber
TESTDIR:Directory
TESTDIR:USB
TESTDIR:picture
TESTDIR:ping

7. Select the new product test to make TEX load it


Select test number:

0 Cashguard_rev1.2_testv2.0
1 Georog_44
2 PS700
3 PS700_demo
Select test number: 3

8. TEX is now ready to execute the product test

Quit and eject the USB-drive

|======|
| Test Execution Framework V0.1 |
| |
| --Last run configuration-- |
| User: gkar |
| Directory: PS700_demo |
| The folloing options are avaiable |
| |
| 1. run selected product test |
| 2. run single device-test |
| 3. change operator |
| 4. change product test |
| 5. modify product test |
| 6. create new product test |
| 7. view current product test config |
| 8. help |
| 0. quit |

select number : 0

Press to close this window...

9. Mount the flash drive on the DUT and run TEX from the USB drive.


10. Connect loopback plugs or similar.

11. Select menu option 1, "run selected product test".


12. If a test fails, info about the failed test is printed on screen. You have three choices.

| FAIL MENU |
| |
| 1. continue testing, error is logged |
| 2. run failed test again |
| 3. clear log and exit to main menu |
|======|
select number 1-3:

13. If passed, log-files will be saved and you will be asked how to exit.

File structure on the USB drive:

/ (USB root): TEX.exe, LastRun.conf
/"product folder": DUT.conf, main.log, /tests, /logs
/logs: Product_sn_date.log

14. When the product test, including all device tests needed for production testing, has been created, do the following...

Any changes made to generic Test-Files are saved to the generic test repository in SVN.

Test files that are product-specific are uploaded to the test folder under the DUT's folder in SVN.

/SVN/”DUT”/Test/ /SVN/Test-repository/


ARENA

/ (USB drive root): TEX.exe, LastRun.conf
/"product folder": DUT.conf, main.log, /tests, /logs

A complete copy of the USB-drive file structure (including TEX and all device test folders) is uploaded to Arena.

lastRun.conf:
PRODUCT_DIR:PS700       (the name of the folder the product test is in; no spaces)
LAST_USER:gkar          (user name; no spaces)
SERIAL:123456789abcd    (serial number of the most recently tested DUT)
DATE:2012-12-12         (date of testing; the format can be chosen in the test.conf file belonging to the date test)

DUT.conf:
PRODUCT_NAME:PS700      (product name; does not have to be the same as the folder name; spaces are OK)
PRODUCT_REV:1.2         (product revision; no spaces)
TEST_VER:0.1            (test version; no spaces)
DESCRIPTION:PS700 test to test test   (description of the product test; spaces are OK)
TESTDIR:date            (IMPORTANT: date must be the first test in the list. TEX will be angry otherwise)
TESTDIR:serialnumber    (IMPORTANT: serialnumber must be the second test in the list. TEX will be angry otherwise)
TESTDIR:Directory
TESTDIR:ping
TESTDIR:USB
TESTDIR:picture

The device tests (TESTDIR:) are the names of the folders of the device tests; date and serialnumber are also device tests (no spaces). This file is automatically generated when running the function "Create new product test" in TEX. It can still be copied and edited manually.

lastRun.conf:
PRODUCT_DIR:PS700   (folder name of the last run product test)
LAST_USER:gurra     (last user/operator)
SERIAL:999999       (serial number of the last tested product)
DATE:20100101       (date when the last test was performed)

This file is automatically generated by TEX. It is still possible to manually edit it. Device test name, does not Description of device test, have to be the same as can be used as instructions folder name (spaces are OK) to operator as it is printed on screen when running (spaces are OK) test.conf Command that is run as if it TEST_NAME:date was run from the command DESCRIPTION:enter date YYYY-MM-DD line interface (terminal) CMD: Here it is empty Where the output from the command should go. stdout = printed to screen REFERENCESOURCE:stdin stdin = TEX waits for a keyboard entry CONDITION:regexp_date filename.xxx = push output to file ”filename.xxx” REFERENCE:20[0-9]{2}-[0-1][0-9]-[0-3][0-9] ENCODING:IBM 850 Defines how the output should be formated for the regular expression regexp_date = defines that it is a date (one device test must use this) regexp_serial = defines that it is a serial number (one device test must use this) regexp_inline = the regular expression is matched line by line (typically used if you are searching for a single occurance of a phrase in the output from the device test) regexp_infile = the regular expression is matched to the whole output as if all output was a sinle line. (typically used if you want to find a specific number of occurances of a phrase)

REFERENCE is a regular expression (regexp), a kind of pattern language used to match phrases; see the cheat sheet at the end of this manual or google ”regexp”. ENCODING sets the text encoding, which varies depending on application and platform; see the list of common combinations at the end of this manual.

The device tests ”date” and ”serialnumber” are mandatory. They have to be the first two tests to be run to make TEX proceed with testing. These aren’t really tests, but they behave like tests and are entered in the log as tests. This is because it is convenient to treat all in- and outputs the same way and the format of date and serial number can be modified using the regexp reference.
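The date and serial number entries are matched with the same regexp machinery as any other device test output. As a minimal illustration in Python (TEX itself is a C++ application, and whether it anchors the match to the whole input is an assumption here), the two REFERENCE patterns from the config files behave like this:

```python
import re

# REFERENCE patterns copied from the "date" and "serialnumber" test.conf files.
DATE_PATTERN = r"20[0-9]{2}-[0-1][0-9]-[0-3][0-9]"
SERIAL_PATTERN = r"([a-zA-Z0-9-]+)"

def entry_ok(pattern, keyboard_input):
    # fullmatch = the whole operator entry must fit the pattern
    # (an assumption; TEX's exact matching mode is not documented here).
    return re.fullmatch(pattern, keyboard_input) is not None

print(entry_ok(DATE_PATTERN, "2013-11-11"))     # True: well-formed date
print(entry_ok(DATE_PATTERN, "13-11-11"))       # False: century missing
print(entry_ok(DATE_PATTERN, "2013-11-41"))     # False: no day starts with 4
print(entry_ok(SERIAL_PATTERN, "PS700-999999")) # True: alphanumerics and hyphens
```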

test.conf (device test ”serialnumber”):

TEST_NAME:serial number
DESCRIPTION:enter serial number
CMD:
REFERENCESOURCE:stdin
CONDITION:regexp_serial
REFERENCE:([a-zA-Z0-9-]+)
ENCODING:IBM 850

TEST_NAME does not have to be the same as the folder name (spaces are OK). ”stdin” tells TEX to wait for input from the keyboard, followed by Enter. This input is then matched by the regexp as if it were the output from a script or program (hence the confusion of ”command out” being ”standard in”).

The REFERENCE regexp matches one or more alphanumeric characters or hyphens.

diskpartscript.txt:

list volume
exit

”list volume” lists all disk volumes on the machine, including removable drives.

test.conf (device test ”USB”):

TEST_NAME:disc mount (incl. USB)
DESCRIPTION:prints list of volumes
CMD:diskpart /s diskpartscript.txt
REFERENCESOURCE:stdout
CONDITION:regexp_infile
REFERENCE:((.+Flyttbar.+){2})|((.+Removeable.+){2})
ENCODING:IBM 850

”diskpart” is a program built into Windows XP, Vista and 7. The ”/s” switch makes diskpart run a script, and ”diskpartscript.txt” tells diskpart what to do. ”stdout” prints the output to standard out (usually the display). ”regexp_infile” makes TEX treat the multi-line output as a single line (string), i.e. it removes all ”endline” characters. The regular expression matches two occurrences of the phrase ”Flyttbar” (Swedish) or two occurrences of the phrase ”Removeable”; in this test that tells whether two USB flash drives are attached and mounted. ”.+” matches one or more (+) of any character (.); see the regexp cheat sheet. ”IBM 850” is the encoding used for cmd in Windows.
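The effect of regexp_infile can be reproduced in a few lines of Python (the diskpart output below is invented and shortened for illustration; TEX itself is not written in Python):

```python
import re

# Invented, shortened diskpart output with two removable (USB) volumes.
output_lines = [
    "  Volume 0   C   System   NTFS    Partition    98 GB",
    "  Volume 1   E   USB1     FAT32   Removeable    7 GB",
    "  Volume 2   F   USB2     FAT32   Removeable    7 GB",
]

# REFERENCE pattern from the test.conf above: two occurrences of
# "Flyttbar" (Swedish) or of "Removeable".
PATTERN = r"((.+Flyttbar.+){2})|((.+Removeable.+){2})"

# regexp_infile: remove the end-of-line characters so the whole
# output is matched as one single string.
as_one_line = " ".join(output_lines)
print(bool(re.search(PATTERN, as_one_line)))  # True: both USB drives mounted

# With only one removable volume, the {2} repetition cannot be
# satisfied and the test fails.
one_drive = " ".join(output_lines[:2])
print(bool(re.search(PATTERN, one_drive)))    # False
```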

Jumpers.jpg

test.conf (device test ”picture”):

TEST_NAME:show picture of jumpers
DESCRIPTION:Are the jumpers set as in picture? y/n
CMD:cmd /C jumpers.jpg
REFERENCESOURCE:stdin
CONDITION:regexp_inline
REFERENCE:y
ENCODING:IBM 850

DESCRIPTION is here the question asked to the operator when the test is run. ”cmd” is the terminal in Windows, and ”/C” is a switch for cmd (see ”cmd /?”). ”jumpers.jpg” opens the picture, just like typing jumpers.jpg in the terminal would. ”stdin” tells TEX to wait for input from the keyboard, followed by Enter; this input is then matched by the regexp as if it were the output from a script or program (hence the confusion of ”command out” being ”standard in”). ”regexp_inline” makes TEX match the regexp against each line of the output from the device test. ”y” is the regexp pattern that is searched for; here it is the answer from the operator, confirming that he/she has set the jumpers where they should be and is ready to continue with the test.

”IBM 850” is the encoding used for cmd in Windows.

Using TEX in production

1. The test bundle (TEX and the folder structure including all device tests) made for production testing is downloaded from Arena to a USB-drive or similar.

[Diagram: Arena → PC → USB-drive]

2. Mount the flash drive on the DUT and run TEX from the USB-drive.

The logs are saved on the USB-drive

main.log accumulates data from every test that has been run with this USB-drive.
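The accumulation into main.log can be pictured as a simple append-only text file on the USB-drive. The sketch below assumes a hypothetical one-line-per-run format; the real field layout of TEX's log is not shown on this manual page:

```python
import tempfile
from pathlib import Path

def append_run(logdir, serial, date, passed, failed):
    # Hypothetical summary line; every completed product test appends one,
    # so the USB-drive collects the history of every DUT tested with it.
    with (logdir / "main.log").open("a", encoding="utf-8") as log:
        log.write(f"{date} {serial} passed={passed} failed={failed}\n")

logdir = Path(tempfile.mkdtemp())
append_run(logdir, "999999", "20100101", passed=5, failed=0)
append_run(logdir, "999997", "20100102", passed=4, failed=1)
print((logdir / "main.log").read_text(encoding="utf-8"))
```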

3. There is an option to run just a single test for trouble-shooting


APPENDIX 13: SOFTWARE USED

Program / Software Purpose

Filezilla (www.filezilla.org) Move code to Linux platform

Ganttproject (www.ganttproject.biz) Used to create time plan

OpenOffice (www.openoffice.org) Used to write master thesis report

Python (www.python.org) Used to evaluate programming language

MinGw (www.mingw.org/) Used to evaluate programming language

Qt Creator (www.qt.nokia.com) Used to create the TEX application

Usb-creator (www.pendrivelinux.com) Used to install Xubuntu

Xubuntu Used to test TEX on Linux platform

GoJtag(www.gojtag.com) Used to evaluate Boundary scan functionality

Xjtag (www.xjtag.com) Used to evaluate Boundary scan functionality

APPENDIX 14: TEST SYSTEM SPECIFICATION

Test-platform Data Respons

Request for quotation

Automated test-platform

Purpose: Present the requirements for Data Respons to quote the development and delivery of a software program for automated test of embedded systems

Revision history:

Rev. no.  Date      Author           Amendment details
1         11-08-31  Magnus Dormvik
1.1       11-09-15  Magnus Dormvik

1 References and glossary

Number  Document name         Document identity                Version
1       General Requirements  Core SR Automated test-platform  1.0

1.1 Glossary

BSDL  Boundary Scan Description Language
TAP   Test Access Port
DFT   Design For Test
DUT   Device Under Test
ISP   In System Programming
PCB   Printed Circuit Board
ICT   In Circuit Testing
PXE   Preboot Execution Environment
OS    Operating System

2 Introduction

2.1 Background information

This document describes the requirements for a generic test-platform to be used by Data Respons. The main target audience for this document and product is the operations and development departments at Data Respons Stockholm. With this document the Master Thesis students engaged in developing this solution wish to be as clear as possible about what will be developed during this project.

2.2 System Description

A test-platform map of the complete test solution, as seen from the pre-study, is shown in Illustration 1. The image shows the main blocks and their dependencies.

[Illustration 1 (legend: part of master thesis, existing parts, further work, individual studies) shows the main blocks: external port logger, log file, automated test-program for DUT (OS independent), SQL test-log database, OEM OS for device under test, multi-platform test OS, jumpstart server with OS and program installation, bootable USB stick, internal storage, Boundary-scan test-system, electronic load, and the DUT (Device Under Test).]

Illustration 1: Test-system dependencies

The different parts of the test-platform exist in physically different parts of the company. The test-program is movable software designed to run simultaneously on multiple machines. The test OS is a PXE-bootable jumpstart OS provided as a service on the operations network. The hardware tester is a hand-held digital logging device plugged into the DUT. The Boundary-scan software runs on a laptop with a JTAG bridge plugged into the TAP of the DUT.

The automated test-program is the main hub of the solution. It is a low-system-requirement, multi-platform CLI program used as a framework for executing lists of test code written for the specific platform and devices, and for sending log-files to a test-file area located on the network or to a SQL test-log database.
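The core execution model described above (run a command from a test list, capture its output, and match the output against a reference pattern) can be sketched in a few lines of Python; the real framework is a C++ CLI application, so this only illustrates the principle:

```python
import re
import subprocess
import sys

def run_device_test(cmd, reference):
    # Run the command, capture stdout as text and match it against
    # the REFERENCE regexp, mimicking REFERENCESOURCE:stdout.
    result = subprocess.run(cmd, capture_output=True, text=True)
    return re.search(reference, result.stdout) is not None

# Illustrative example (not from the TEX test suite): check that the
# local Python interpreter reports a 3.x version string.
print(run_device_test([sys.executable, "--version"], r"Python 3\.[0-9]+"))
```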

The hardware tester is a hardware tool for measuring voltage levels on custom I/O ports. This is done with a changeable port-connector that allows testing of different connectors, such as DB9 and RJ45, running different protocols, such as RS232 and Ethernet. Built-in current loading is used to load the port according to specification. Voltage levels are measured and communicated to the automated test-program's logging system. The boundary-scan test-system is a JTAG TAP to USB bridge with software for generating automated boundary scans from the PCB net-list.

3 Services to be provided

3.1 Required hardware In order to develop the test-platform at least two different DUTs are required for test and development. The DUTs should include common I/O such as Serial RS232, USB, Ethernet.

3.2 Required software

• Software to access RMA statistics
• Development tools for chosen programming language
• OEM OS for DUT
• Test-scripts for device test of DUT

4 External port logger

4.1 Functional requirements Hardware tester

• Shall connect and communicate with DUT via USB
• Shall be able to send data to test program on DUT
• Shall be able to measure voltage levels of common ports
• Shall be able to draw nominal current from common ports to simulate long cable and load
• Shall be possible to add new ports
• Shall be designed for a fairly large number of test ports
• Shall be able to choose which ports are active
• Shall be able to measure frequency
• Should be designed for hard physical treatment
• Should be able to measure and log current drawn by DUT in real time
• Should be able to measure and log temperature with probe in real time
• Should be configurable to create custom tests with respect to voltage level, current level, time and frequency

Example of measurements:
• Test serial RS232 levels
• Test serial RS485 levels (RMS voltage at max current?)
• Test Ethernet levels (RMS voltage at max current?)
• USB speed and 500 mA load on all ports
• VGA levels (RMS voltage at max current?)
• Power consumption

4.2 Non Functional requirements
• Shall be able to calibrate the logging tool
• Shall be mobile
• Shall be powered via USB
• Shall be easy to use
• Shall be reproducible

• Should be in a happy colour so people get happy when they see it
• Should be quick to connect
• Should be designed for electric overload

5 Automated test-program

5.1 Description The Automated test-program is a multi-platform, low system requirements application capable of running command-line programs and parsing the answer from command-line or from file.

5.2 Functional requirements

• Shall provide an automated way to run tests on commonly used ports and devices.
• Shall generate relevant logs and automatically save them to an appropriate location for archiving.
• Shall be able to generate test reports tailored to the devices being tested.
• Shall be able to run on systems with no monitor (e.g. terminal mode).
• Shall be able to load system configurations for different systems and handle versions of configurations.
• Shall be possible to upgrade the test program to run on new hardware / OS.
• Shall run on Windows Embedded / XP / 7 and Linux.
• Shall be able to execute a command after another task has finished.
• Shall be able to execute a task after a given moment in time.
• Shall provide a convenient way of logging non-automated test-procedures.
• Shall provide an automated or manual way of entering DUT serial number.
• Shall provide an automated or manual way of entering date and time.
• Shall provide a manual way of entering test-person.

• Should be able to list common devices and run tests according to test requirements.
• Should be able to push log-files to a pre-developed SQL log-file database.
• Should be able to load preconfigured custom tests from a repository.
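Two of the requirements above, executing a command after another task has finished and executing a task after a given moment in time, can be sketched as a sequential runner with an optional earliest start time. Names and the polling policy below are illustrative only, not taken from TEX:

```python
import time
from datetime import datetime, timedelta

def run_sequence(tasks):
    # tasks: list of (name, callable, earliest_start_or_None).
    results = []
    for name, func, not_before in tasks:
        if not_before is not None:
            # "execute task after given moment in time"
            while datetime.now() < not_before:
                time.sleep(0.01)
        # the next task starts only after the previous one has finished
        results.append((name, func()))
    return results

soon = datetime.now() + timedelta(milliseconds=50)
out = run_sequence([
    ("first", lambda: "ok", None),
    ("timed", lambda: "ok", soon),
])
print(out)  # [('first', 'ok'), ('timed', 'ok')]
```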

5.3 Non Functional requirements

• Shall run on ultra-low disk-storage test-platforms (minimum 10 MB).
• Shall be easy to maintain.
• Shall be easy to create automated testing of new systems.
• Shall have a settings menu with required options.
• Shall be able to run on systems with minimal system RAM (minimum 32 MB).
• Shall be able to configure settings regarding log-file location and logging level.

5.4 Limitations
• Shall not include any built-in tests, only a framework to execute predefined tests from CLI.
• Shall not include a GUI.

6 Documentation requirements

Required:
• User manual
• Guide regarding test-script generation
• Code-blocks explanation