Linking Mixed-Signal Design and Test

Generation and Evaluation of Specification-Based Tests

Nur Engin

Composition of the doctoral committee:

Chairman: Prof.dr. W.E. van der Linden, Universiteit Twente
Secretary: Prof.dr. W.E. van der Linden, Universiteit Twente
Promotor: Prof.dr. H. Wallinga, Universiteit Twente
Assistant Promotor: Dr.ir. H.G. Kerkhoff, Universiteit Twente
Referee: Dr.-Ing. M.J. Ohletz, Alcatel Microelectronics
Members: Prof.dr.ir. B. Nauta, Universiteit Twente
         Prof.dr.ir. T. Krol, Universiteit Twente
         Prof.dr.ir. W.M.G. van Bokhoven, TU Eindhoven
         Prof.dr.ir. J.L. Huertas Diaz, University of Sevilla

Title: Linking Mixed-Signal Design and Test: Generation and Evaluation of Specification-Based Tests
Author: Nur Engin
ISBN: 90-365-1494-0

Printed by Febodruk B.V., Enschede
© N. Engin 2000

LINKING MIXED-SIGNAL DESIGN AND TEST

GENERATION AND EVALUATION OF SPECIFICATION-BASED TESTS

PROEFSCHRIFT (DISSERTATION)

to obtain the degree of doctor at the Universiteit Twente, on the authority of the rector magnificus, prof. dr. F.A. van Vught, in accordance with the decision of the College van Promoties (Doctorate Board), to be publicly defended on Friday 29 September 2000 at 16:45.

by

Nur Engin
born on 16 September 1970 in Ankara (Turkey)

This dissertation has been approved by the promotor, Prof. dr. H. Wallinga, and the assistant promotor, Dr. ir. H.G. Kerkhoff.

Was man nicht versteht, besitzt man nicht.
(What one does not understand, one does not possess.)

GOETHE

Contents

Summary
Acknowledgments
Symbols
Abbreviations

1 Introduction
  1.1 Testing: The Hidden Challenge
  1.2 Analog and Mixed-Signal Testing
    1.2.1 Analog vs Digital Testing
    1.2.2 Analog/Mixed-Signal Challenges
  1.3 Motivation and Problem Definition
  1.4 Outline
  1.5 Bibliography

2 The Mixed-Signal Test Problem
  2.1 Introduction
  2.2 Mixed-Signal IC Architectures
  2.3 Current Design and Test Practice
    2.3.1 Prototype Test
    2.3.2 Production Test
  2.4 Market Requirements
    2.4.1 Quality
    2.4.2 Cost
    2.4.3 Time-to-Market
  2.5 Mixed-Signal Test: Current Issues and Challenges
    2.5.1 Test and Measurement Environments
    2.5.2 Design for Testability (DfT)
    2.5.3 Built-in Self-Test (BIST)
    2.5.4 Automatic Test Program Generation (ATPG)
    2.5.5 Test Program Evaluation
  2.6 Trends in Mixed-Signal IC Design
  2.7 Trends in IC Technology
  2.8 Design-Test Link
  2.9 Conclusions
  2.10 Bibliography

3 A General Framework for Mixed-Signal Test Generation and Testing
  3.1 Introduction
  3.2 Design and Test Flow: The Current Status
  3.3 High-Level Considerations
  3.4 Macro-Level Considerations
  3.5 Simulation Support for Test
    3.5.1 Simulation-Based Test Generation
    3.5.2 Virtual Testing
  3.6 Reuse Issues in Design and Test
  3.7 State of the Art in Design-Test Link
    3.7.1 Virtual Test Software
    3.7.2 Test Plan Generation Tools
    3.7.3 Test Program Evaluation Tools
  3.8 A General Framework for Design-Test Link
    3.8.1 Definitions
    3.8.2 Design and Test Methodology
  3.9 Conclusions
  3.10 Bibliography

4 MISMATCH: A Framework for Design-Test Link
  4.1 System Overview
  4.2 Integrated Design and Test Flow
  4.3 Test Database
  4.4 Design for Testability
  4.5 Design Database
  4.6 MISMATCH CAD Data Flow
    4.6.1 Usage of Simulation Results
    4.6.2 Test Set Selection
  4.7 MISMATCH CAT Data Flow
    4.7.1 Test Control Signal Generation
    4.7.2 Automatic Routing for the Test Set
    4.7.3 Test Generation for Mixed-Signal Macros
    4.7.4 Test Generation for Digital Macros
  4.8 An Example: Design and Test of a Compass Watch
    4.8.1 IC Overview
    4.8.2 Analog Functionality and the Test Library
  4.9 Test Results
    4.9.1 Digital Parts
    4.9.2 Analog Parts
  4.10 Discussion of Experiences
  4.11 Conclusions
  4.12 Bibliography

5 Defect-Oriented Test Evaluation for Analog Blocks
  5.1 Introduction
  5.2 General Test Selection Criteria
  5.3 Specification Coverage vs. Fault Coverage
  5.4 Manufacturing Defects and IC Faults
  5.5 Layout-Based Fault List Extraction
    5.5.1 Critical Area
    5.5.2 Fault Probability Calculations
  5.6 Analog Fault Simulation as a Test Evaluation Method
  5.7 Problem Definition
  5.8 Standard Methods in Circuit Simulation
  5.9 Simulation Complexity
    5.9.1 Evaluation of Device Models
    5.9.2 Solution of Linear Equations
    5.9.3 Number of NR Iterations
  5.10 Fault Simulation: Overview of Existing Methods
    5.10.1 Methods for Linear Circuits
    5.10.2 Methods for Solving Sets of Linear Equations
    5.10.3 Other Methods
  5.11 Conclusion
  5.12 Bibliography

6 A New Approach Towards Analog Fault Simulation
  6.1 Introduction
  6.2 Simulator Requirements
  6.3 DC-Bias Grouping
  6.4 One-Step Relaxation Method
  6.5 Grouping Methods
  6.6 Partial LU Update
  6.7 Parallel Simulation
  6.8 Implementation and Results
  6.9 Conclusions and Future Research
  6.10 Bibliography

7 Conclusions and Recommendations
  7.1 Summary of Results
  7.2 Original Contributions of This Thesis
  7.3 Recommendations

Appendices

A Example of a Virtual Instrument
  A.1 Front Panel
  A.2 Diagram

B Mixed-Signal Test System: An Overview
  B.1 Digital Tester Part
  B.2 Analogue Tester Part
  B.3 Bibliography

C Verification Test Results for the Compass Watch Buffer Macro

Samenvatting (Summary in Dutch)

About the Author

Summary

The work described in this thesis is aimed at the exploration of new methods for the integration of design and test development procedures for mixed-signal integrated circuits (IC's). Mixed-signal IC's are currently found in many electronic systems, including telecommunications, audio and video instruments, automotive parts, etc. The testing of these IC's presents problems due to the complex nature of analog functionality and the non-automated analog design process. Automatic generation of test programs for analog parts is a problem which is not yet fully solved. Once a test is generated, formal methods to ensure the quality of the developed tests either do not exist or have a large overhead. Systematic links between the design and test development processes of analog and mixed-signal circuits are required to improve these points and to ensure high quality and low time-to-market (TTM) for mixed-signal IC's.

The various aspects of the mixed-signal test problem are discussed in chapter 2 of this thesis. A general discussion of the present mixed-signal design and test practice, the implications of market requirements on the test methodology, and the future challenges for mixed-signal testing is presented. It is concluded that long test development and debugging times and the lack of methods for realistic test evaluation are the main challenges in mixed-signal testing at present.

The discussions in chapter 3 are aimed at defining a general framework for the integration of mixed-signal design and test activities along the IC development trajectory. For this, the test considerations at various levels of abstraction are given, and the present state of the art and the possibilities in virtual testing and design-test link tools are presented. At the end of this chapter, a general framework for linking mixed-signal design and test is presented.

In chapter 4, an environment for the integration of design and test flows for the prototype test of mixed-signal IC's is introduced and implemented. The described environment (MISMATCH) is based on the sharing of design data with the test environment for the generation of specification-based prototype tests during the design steps.

The functionality consists of the selection of test methods from a test library, the usage of macro specifications to select test functions, the generation of control signals for access to the tested parts, and the addition of information for automatic routing to the required tester modules. MISMATCH has been implemented using existing design and test frameworks, and the resulting system has been used for the design and test trajectory of a mixed-signal IC. The corresponding test results are presented.

In chapter 5, an important aspect of the generated tests, the test effectiveness, is discussed. The manner in which tests have been generated in the MISMATCH framework as described in chapter 4 guarantees only that certain parameters are measured, not that the process-related failure possibilities are covered. In order to reach high IC quality figures, tests have to address the cause and not only the consequence of defects. For this reason, the link between the IC manufacturing process and test quality is investigated in this chapter. The chapter concludes that decreasing the fault simulation time is one of the biggest challenges in evaluating the quality of mixed-signal tests. A discussion of the existing methods for decreasing the fault simulation time is presented.

The aim of decreasing the fault simulation time is pursued in chapter 6, where a new method for the efficient fault simulation of mixed-signal circuits is presented. This method is based on decreasing the complexity of fault simulations by using parallel simulation techniques. A prototype fault simulator has been implemented for checking the speedup of the new method. This implementation uses resistive bridges as the fault model, although the method is also applicable to other fault models. The experiments made using the prototype simulator are presented in the chapter. The results show that an improvement of 30% is possible compared to conventional simulation methods.

Acknowledgments

The efforts of a number of people have been vital for the existence and quality of this thesis. I owe gratitude to all colleagues and friends who played a part in making the last four and a half years a nice time, and who have encouraged me to keep up the good spirit even in the 'less nice' times. Specifically, I would like to thank:

• Han Speek, for his indispensable help with the implementation and improvement of the fault simulation software, and for his constant (moral and technical) support and constructive criticism during the writing of this thesis,

• Ronald Tangelder, for being a resourceful and enjoyable discussion partner (be it about IC testing, European history or the touristic spots in Germany!), and for his detailed and valuable feedback on the earlier versions of this thesis,

• My supervisor Hans Kerkhoff, for introducing me to mixed-signal testing and for laying the conceptual foundations of the MISMATCH framework,

• My promotor Hans Wallinga, for his support especially during the last year and the writing of this thesis,

• Commission member Michael Ohletz, for his detailed remarks on the earlier version of this thesis, which have contributed a great deal to the quality of the final version,

• Nico Csizmadia, Taco Zwemstra, Yizi Xing and Harry Bremer of Philips Semiconductors, discussions with whom have helped me to see 'the practical face of mixed-signal testing'; special thanks also to Nico Csizmadia for the nice and useful time during my practical training in Nijmegen,

• Students Johan Wesselink and Ercan Yılmaz for implementing the conversion functions for the MISMATCH framework, Kitty van Nee and Pieter Jan Bouma for making parts of the test library, and Metin Tümkaya for his efforts towards implementing MISMATCH,

• Cor Bakker, for his high-quality maintenance of the PC network and his fast and friendly support with all PC problems,

• Marcel Weusthof and Henk de Vries, for their technical support during the test experiments,

• Egbert Holl, for the maintenance of the HP network,

• The administrative staff of the department ICE and later the teaching chair SC, the secretaries Margie Rhemrev, Marie-Christine Prédéry and Mariska Buurman, and the financial administrators Sophie Kreulen and Joke Vollenbroek: thanks for keeping things organized and the work atmosphere enjoyable,

• My office mates Victor Kaal, Jan Harm Nieland, Milan Stancic and Liquan Fang, for the nice and interesting discussions (in a wide spectrum including goats, computers, photography, movies, food, politics, and common words in Serbian and Turkish): the good mood in the office was sometimes crucial in 'keeping me going'!

• Gidi Kroon, for sharing all kinds of good and bad events of the last years, and for suggesting improvements on some parts of this thesis,

• Gidi Kroon and Milan Stancic, for accepting to be my 'paranimfen'.

Last but not least, I would like to thank the family members who have meant so much to me with their encouragement and support. My parents and Elif: thanks for the 'eat-a-lot-laugh-a-lot' weeks in Turkey which made me go back to work in a good mood; my mother, thanks for emailing me every day and worrying so much about me that nothing is left for me to worry about myself. Canım anneciğim, babacığım ve Elif: teşekkürler! (My dear mother, father and Elif: thank you!) Ans, Jaap and Raph, thanks for your unconditional support and friendship, and Jaap for calling me kızım ('my daughter')! And Marc, thanks for going through this adventure with me, putting up with me at those times when I was not exactly gezellig (convivial), for feeding me and cheering me up, for keeping me up to date with the news about Turkey, for making me understand the 'Dutch way' and feel at home, for believing in me at all times and under all circumstances, and ... for translating the summary into Dutch!

Nur Engin
Eindhoven, September 2000

Symbols

$x$                        scalar
$\mathbf{x}$               vector
$X$                        matrix
$x^{(k)}$                  $k$th iteration of $x$
$x^*$                      exact solution of $x$
$\tilde{x}$                estimated solution of $x$
$\dot{x}$                  time derivative of $x$
$|x|$                      absolute value of the scalar $x$
$\|\mathbf{x}\|$           Euclidean norm of the vector $\mathbf{x}$
$x \in S$                  $x$ is an element of set $S$
$|S|$                      cardinality of set $S$
$\mathbb{R}$               set of real numbers
$\mathbb{R}^n$             set of real vectors of length $n$ ($n$-dimensional space)
$\mathbb{R}^{n \times n}$  set of real matrices of size $n \times n$
$\binom{p}{r}$             number of combinations of $r$ objects out of $p$ objects
$F(\cdot)$                 scalar function
$\mathbf{F}(\cdot)$        vector/matrix function
$[0, T]$                   closed interval between 0 and $T$

Abbreviations

ADC       Analog-to-Digital Converter
ATE       Automatic Test Equipment
ATPG      Automatic Test Pattern Generation
AWG       Arbitrary Waveform Generator
BIST      Built-In Self-Test
CAD       Computer-Aided Design
CAT       Computer-Aided Test
CUT       Circuit Under Test
DAC       Digital-to-Analog Converter
DfM       Design for Manufacturability
DfT       Design for Testability
DIB       Device Interface Board
DOT       Defect-Oriented Testing
DSP       Digital Signal Processing
DUT       Device Under Test
EDA       Electronic Design Automation
IC        Integrated Circuit
IP        Intellectual Property
GPIB      General-Purpose Interface Bus
MISMATCH  MIxed-Signal MAnipulation Tool for CHips
MNA       Modified Nodal Analysis
MUT       Macro Under Test
NR        Newton-Raphson (iterations)
PLL       Phase-Locked Loop
PPM       Parts Per Million
PWL       Piecewise Linear
SNR       Signal-to-Noise Ratio

xiii STIL Standard Test Interface Language THD Total Harmonic Distortion TTM Time-To-Market VI Virtual Instrument VHDL VHSIC Hardware Description Language VHSIC Very High-Speed VME Virtual Machine environment VXI VMEbus eXtensions for Instrumentation

Chapter 1

Introduction

When the historians of the future have to name our time, they will surely have a large number of names to choose from: internet age, connectivity age, ubiquitous communication age... Whatever their choice, there is no doubt that the revolution in communications is one of the most representative characteristics of the last two decades. With the number of internet connections doubling every year [Esta99], the internet has become far more commonly used than could be expected 10 years ago. Mobile telephony has been accepted by the large masses at a similar speed, following the decrease in the price of buying and using cellular phones. The place and importance of electronic appliances in our daily lives is growing at a fast pace. Looking ahead, it is possible to anticipate some of the technological leaps to come in the near future. A series of new applications is already beginning to bring together the internet, mobile communications and consumer electronics platforms. Power management applications will have to improve to keep the mobile applications within these platforms usable. These developments will result in many changes in the way people live, work and communicate. The human-computer interface will become more natural through developments in speech recognition techniques. Improvements in the quality of automotive electronics, home entertainment systems and office peripherals will certainly continue. As a result, many aspects of human life, some of them 'safety-critical', are controlled by electronic systems.

Considering that most of the data processing that occurs in communications and computers is digital, it may seem as if analog design in integrated circuits (IC's) is becoming less common. However, one of the few common points among the diverse application domains mentioned above is that they require mixed-signal IC's for their key functionality. Systems for mobile telephones and wireless networks require transmitters, receivers and power management circuitry. Multimedia applications require conversion into analog form at the back end, speech recognition has to acquire speech as an analog signal, and internet connections via digital subscriber line (DSL) IC's acquire and transmit data via an analog front end (AFE). The number of examples could easily be extended. The fact is that we live in an analog world, and the more electronics becomes a part of our daily lives, the more common mixed-signal IC's will become. A recent market analysis shows that in the near future the main driving force behind the semiconductor industry will no longer be the computer industry, but communications [McCl00]. Communications is an area based on RF and mixed-signal electronics, and this fact provides a good background for contemplating the challenges in producing high-quality, low-cost mixed-signal IC's.

1.1 Testing: The Hidden Challenge

The question whether a given functionality can be designed and manufactured as an IC almost always brings design and manufacturing challenges to mind. A third group of challenges is almost always overlooked and heavily underestimated: the challenges related to testing the IC in order to guarantee its quality. When the IC market is considered, the customers are the system developers. For the developed IC to function correctly on a system board, important technical features have to be verified before it is shipped to the customer company. In the examples given in the previous section, many technical features such as speed (e.g. networking applications) and output signal distortion level (e.g. audio applications) are key to the performance of the IC's, and so the verification of these features is always required by the sales contract. In general, testing challenges can be classified as test software-, test hardware- and access-related challenges. Test software includes test generation: the combination of procedures and algorithms for finding good test signals, i.e. signals whose application tells us almost surely whether the IC will function correctly. Test hardware is the hardware framework that is responsible for applying test signals and measuring the IC response to these signals. The performance features of the test hardware, such as speed, noise figures, and signal sourcing and measurement ranges, have to fit the specifications of the tested IC, or the device under test (DUT). And finally, the design should be adapted so that the IC nodes that have to be measured for testing are accessible.

1.2 Analog and Mixed-Signal Testing

Historically, digital and analog testing have developed at very different paces, leaving analog test methodology today at a far earlier stage than its digital counterpart. CAD tools for automatic test generation and test circuitry insertion have been available for digital circuits for two decades already. The main reason for this is the ease of formulating test generation as a mathematical problem, owing to the discrete signal and time values. The distinction between what does and what does not work is crisp and clear for digital circuitry. For analog, on the other hand, the question can better be stated as 'how well' the circuitry works. Does an analog-to-digital converter (ADC) work correctly when it delivers a signal-to-noise ratio (SNR) of 69.99 dB, when the specified minimum performance figure is 70 dB? Does the system board application suffer from this underperformance? For analog designs, the definition of fault-free and faulty circuits is much more a matter of specification thresholds and sensitivity of the application than a sharp distinction as in the case of digital circuits.
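To make this concrete, the following minimal Python sketch (not taken from this thesis; the ideal 12-bit converter model, the coherent-sampling setup and all numbers are assumptions chosen for illustration) implements such a specification-based pass/fail decision: it estimates the SNR of captured converter output with an FFT and compares it against the 70 dB limit.

```python
import numpy as np

def measure_snr_db(samples, signal_bin):
    # Power spectrum via FFT; signal power in the stimulus bin,
    # noise power in all remaining non-DC bins.
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    signal_power = spectrum[signal_bin]
    noise_power = spectrum[1:].sum() - signal_power
    return 10.0 * np.log10(signal_power / noise_power)

N, CYCLES, BITS = 4096, 97, 12      # coherent sampling: 97 cycles in 4096 samples
t = np.arange(N)
stimulus = np.sin(2 * np.pi * CYCLES * t / N)
codes = np.round((stimulus * 0.5 + 0.5) * (2**BITS - 1))   # ideal 12-bit ADC
reconstructed = codes / (2**BITS - 1) * 2.0 - 1.0

snr = measure_snr_db(reconstructed, signal_bin=CYCLES)
SPEC_MIN_DB = 70.0
print(f"SNR = {snr:.2f} dB -> {'PASS' if snr >= SPEC_MIN_DB else 'FAIL'}")
```

For the ideal 12-bit quantizer modeled here the estimate lands near the theoretical 6.02 · 12 + 1.76 ≈ 74 dB, so the check passes; a device measuring 69.99 dB would fail the same check even though, as argued above, it is barely distinguishable in application terms from one measuring 70.01 dB.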

1.2.1 Analog vs Digital Testing

For analog circuitry, the generation of optimal test signals based on design topology is still not fully automated. As opposed to the digital approach based on the gate-level netlist, analog testing still relies mainly on a black-box approach, where the specifications of the circuitry are verified without paying attention to the structure or circuit layout. Another lagging issue is the usage of standard design methods to make the circuitry easily or better testable (Design for Testability, DfT). Scan chains are used in synchronous digital circuitry for this purpose; a comparable analog approach still does not exist. Similarly, the modeling of process defects and the usage of these models for developing and improving test signals has been standard practice for digital circuits for more than two decades, whereas similar methods for analog circuitry are just beginning to be applied by some manufacturers. For some of these issues, alternative approaches are still in the research phase. For others, technically feasible alternatives exist, but the existing production infrastructure and the cost of changing present test methods are slowing down their acceptance. Table 1.1 summarizes the fundamental differences between digital and analog testing; table 1.2 compares the present situation in digital and analog testing.

Digital Design and Testing | Analog Design and Testing
---------------------------|---------------------------
Discrete signal values | Continuum of signal values
Discrete time instants¹ | Continuous time
No coupling between building blocks | Coupling between building blocks can be significant
Systematic design with synthesis based on hardware description languages¹ | Less systematic design, automatic synthesis at research phase
High structural complexity, large number of elements | Lower structural complexity, high functional complexity, small number of elements
Absolute distinction of fault-free/faulty circuit | Distinction of fault-free/faulty circuit depends on tolerance values chosen
Low sensitivity to crosstalk and noise | Very high sensitivity to crosstalk and noise
Low sensitivity to process parameter variations | Higher sensitivity to process parameter variations

Table 1.1: Fundamental differences between digital and analog design and testing

¹ The indicated points apply only to synchronous digital circuits.

Digital Testing | Analog Testing
----------------|----------------
Automatic test generation based on circuit netlist | No automatic test generation based on circuit netlist
Standard DfT inserted automatically and used extensively | No standard DfT, ad hoc access circuitry
Testing dominantly structural | Testing dominantly functional
Standard fault models used | No standard fault models used
DfT does not commonly affect the performance | DfT often affects the performance

Table 1.2: Current state in digital and analog testing

1.2.2 Analog/Mixed-Signal Challenges

1.2.2.1 Test Software-Related

The IC manufacturing process is neither deterministic nor fully controllable. Microscopic particles present in the manufacturing environment, slight variations in the parameters of manufacturing steps, and human errors can all cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process. Tests applied to an IC have to be able to discriminate between fault-free and faulty IC's in a reasonable amount of time. If the test is not sufficiently effective, faulty IC's may be delivered or fault-free IC's may be rejected. Delivering a faulty IC decreases the quality and reliability of the final product. Rejecting a fault-free IC, on the other hand, costs the IC manufacturer, and eventually the consumer, extra money. Conversely, if a test perfectly discriminates between faulty and fault-free devices but requires a long time for measurements and computation, this also makes the IC more expensive, since test time has a significant share in the overall IC costs. To summarize, the generation and evaluation of effective tests is a very important issue in IC production, with direct consequences for the price and quality of the final product.

For digital circuits, algorithms for the generation of test patterns based on the gate-level netlist have existed since as early as the 1960's [Roth66], [Goel81]. Without these so-called ATPG (Automatic Test Pattern Generation) methods, it would certainly have been impossible to produce the large digital IC's of the last twenty years at reasonable cost and quality. A similar test generation solution for analog testing became necessary with the increasing integration of analog and digital functionality on one chip. The analog test community has also been aiming at a solution comparable to that in digital, but the analog version of the problem is not solvable by similar analytical techniques. In the digital case, the discreteness in time and signal values, the well-defined fault propagation paths and the topological boundaries of fault influence have simplified the problem to some extent with respect to the analog case. As a result, algorithms have been developed that calculate the signal changes introduced by faults and use logic rules to find input combinations which create differences between fault-free and faulty behavior (path sensitization) and propagate these differences to the primary outputs; a toy illustration follows after the list below. A similar approach cannot be applied to analog circuits. The main reasons for this are:

• There are not only two signal values to choose from; in principle, an infinite number of signal values is possible. The choice between two specific signal values can give better or worse results with respect to observing the fault at the outputs.

• The time-variation properties of analog signals add an extra dimension to the problem, since applying an AC, DC or transient test can be more or less efficient depending on the circuit and the targeted fault.

• It is not possible to make a one-to-one link between the function and structure of analog circuitry as in digital circuits. Given a particular topology, there is no general way of determining which part of the functionality is of interest and what the related performance limits are.

• The propagation of fault effects to the output is not possible in the digital sense, for two reasons. First, the effect of a fault cannot be modeled as propagating in one direction, as is the case in digital. The fault effect propagates in all directions, and the calculation of this propagation pattern therefore becomes much more complex than in the digital case. Secondly, in analog circuits, the information that a fault is present at a certain node does not readily comprise the signal value information for that node, making time-consuming calculations of signal values necessary. Nonlinearity, loading between circuit blocks, and the presence of energy-storing components and parasitics further complicate these calculations.
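To make the digital reference point concrete, here is a deliberately tiny Python sketch (not from this thesis; the three-gate netlist and net names are invented). It finds a test for a stuck-at fault on an internal net by exhaustively searching for an input vector on which the fault-free and faulty circuits disagree. Real ATPG algorithms such as the D-algorithm [Roth66] or PODEM [Goel81] reach the same goal by path sensitization rather than brute force, which is what makes them scale.

```python
from itertools import product

# Toy combinational netlist (invented): y = (a AND b) OR (NOT c),
# with internal nets n1 = a AND b and n2 = NOT c.
def simulate(inputs, stuck=None):
    """Evaluate the netlist; 'stuck' optionally pins a named net to 0 or 1."""
    def val(name, computed):
        return stuck[1] if stuck and stuck[0] == name else computed
    n1 = val("n1", inputs["a"] & inputs["b"])
    n2 = val("n2", 1 - inputs["c"])
    return val("y", n1 | n2)

def find_test_vector(fault):
    """Return an input vector on which good and faulty outputs differ."""
    for a, b, c in product((0, 1), repeat=3):
        vec = {"a": a, "b": b, "c": c}
        if simulate(vec) != simulate(vec, stuck=fault):
            return vec
    return None  # the fault is undetectable at the primary output

# Test for net n1 stuck-at-0: requires a=b=1 (activate the fault) and
# c=1 (force n2=0 so the difference propagates through the OR gate).
print(find_test_vector(("n1", 0)))   # -> {'a': 1, 'b': 1, 'c': 1}
```

The two steps visible in the final comment, activating the fault and propagating the resulting difference to an output, are exactly the path sensitization steps mentioned above; the bullet list explains why neither step carries over directly to analog circuits.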

The obstacles presented above have kept analog netlist-based (i.e. structural) test generation from being applied in practice. Research is still being done in this area, and the application of netlist-based test generation in practice requires a breakthrough in terms of computational costs and general applicability. Today, specification-based testing (checking whether the specifications are met) and functional testing (checking the functioning of the circuit with a standard input) are the dominating methods in analog testing.

One point of view is that, since circuit specifications fit more naturally into the way of thinking about analog designs, the analog test generation problem needs to be formulated and approached in a completely different manner. A way of doing that would be to link it with analog synthesis techniques. The analog synthesis problem can be defined as generating an analog netlist from a given function description (which actually happens routinely in the digital case, by means of hardware description languages such as VHDL and digital synthesis tools). If this problem is solved, it will open a possibility for solving the analog test generation problem in a similar way, since there test signals need to be generated based on the netlist, which can be seen more or less as the reverse problem. Unfortunately, a satisfying solution to the analog synthesis problem has not been found to this day. Research on this subject [Kras99], [Deby98], [Leen91] has been going on for two decades already, and the results are still not at the point where an arbitrary analog block design can be made from a description [Ohr99]. So the alternative of solving the analog test generation problem based on these methods remains unfeasible for the time being.

1.2.2.2 Test-Hardware Related

The most significant developments in mixed-signal testing in the last two decades have been in mixed-signal test hardware. Despite this, the evolution in design, technology and integration levels has been so fast that a number of important problems for future products are expected to arise because of insufficient accuracy in signal sourcing and measurement equipment. The 1999 edition of the International Technology Roadmap for Semiconductors predicts that tester parameters such as noise floor, clock speed and clock jitter will have to continue to be improved, although the existing techniques will fail to support this rate of improvement starting from as early as 2001 [SIA99].

1.2.2.3 Access-Related

Until today, the semiconductor industry has kept up with the prediction made by Gordon Moore in 1965 that the functionality per chip would double every 1.5 to 2 years. This growth of the IC integration level is predicted to continue in the coming years, under the pressure of the semiconductor market to preserve the increasing functionality/price and performance/price ratios. The consequence of increasing integration is that less functionality is directly accessible from the IC pins. The expected growth of transistors per IC pin in the coming years is shown in figure 1.1 [Zori99]. The result of this growth is that with every product generation there are more internal components and blocks which have to be tested through indirect access or by means of added circuitry that maintains the possibility of access. Techniques for modifying the design to ensure testability of digital parts are fully automated and standard for synchronous circuits at the moment, but the same is not true for analog. Most of the techniques categorized as BIST (Built-In Self-Test) and DfT (Design for Testability) are in the research phase for analog designs. It is possible to use ad hoc methods to make a number of internal lines accessible from the primary inputs and outputs of the IC, although doing this without decreasing the performance can be difficult for analog circuits [Nagi99].

[Figure 1.1: line chart; vertical axis: testing complexity index in thousands of transistors per pin (0 to 800); horizontal axis: feature size from 0.25 µm (1997) down to 0.035 µm (2014).]

Figure 1.1: Increase in test complexity for digital circuits due to integration as predicted by the International Technology Roadmap in 1998 [Zori99]

1.3 Motivation and Problem Definition

During the early years of the IC industry, when the level of integration did not exceed tens of transistors or a few gates, the design and testing activities were all done by people who understood the manufacturing mechanisms as well as circuit theory [Maly98]. Today, this is no longer true. The enormous diversity of product types, manufacturing techniques and the complexity of designs force engineers to specialize, making abstractions and focusing on one level of the design and production procedure. Test engineers, however, do not have this luxury. The definition of an efficient test procedure for an IC requires a good understanding of the IC functionality, the manufacturing process and the test equipment. This being a difficult task, the tools available for making links between these platforms are still not completely mature and standardized. As explained in the previous section, the testing of mixed-signal IC's presents problems in all of the test software, test hardware and access domains. The quality and price of mixed-signal IC's are heavily affected by test problems in each domain. Since test solutions are more standardized for digital circuits, the mixed-signal test community will have to come up with solutions that are compatible with existing digital methods. There is a general tendency in the EDA (Electronic Design Automation) industry to regard the design-test link as a set of tools which enable the simulation of generated tests. In the last decade, a number of EDA companies have released tools that offer this possibility [Bate92], [Analog]. Other researchers have developed software aimed at the automatic management of hardware test resources [Mieg98]. An issue that has not been addressed in mixed-signal design-test link methodology is a test generation method that takes the design as starting point and makes it easier for the designer to drive the test generation process based on his design information. Another important issue is, once the required test signals and measurements are known, to check whether these are sufficiently effective with respect to the IC design and the manufacturing process. These two issues are the motivations of the research described in this thesis. In figure 1.2, an analysis of the fundamental problems in testing as depicted by Maly [Maly97] is shown. In this diagram, the two fundamental problems are identified as test bandwidth and test quality, and specific challenges for solving these problems are pointed out. For mixed-signal testing specifically, other test hardware-related challenges such as noise floor and accuracy of timing measurements have to be added to this picture. The objective of this thesis can be described as solving a part of the problem under the category 'incomplete test set'.

To summarize, the problem treated in this thesis can be defined as developing the concept of a framework linking mixed-signal IC design and test environments. This framework must enable an IC development team to:

• develop test programs in shorter time,

[Figure 1.2: tree diagram. The fundamental test problems divide into insufficient testing bandwidth and inadequate test quality. Insufficient testing bandwidth stems from insufficient speed of testing (tester pin electronics, tester architecture) and decreasing circuit observability (growing die size, too costly DfT). Inadequate test quality stems from an incomplete test set (immature test generation methodologies, too high complexity of fault simulation) and simplistic fault models (lack of understanding of fault mechanisms).]

Figure 1.2: The fundamental test problems and their causes [Maly98]

• debug test programs in shorter time,
• automatically generate test programs that will guarantee high IC quality.

The potential obstacles and opportunities for the development of a design-test framework satisfying the above conditions will be pointed out, and solutions to some key problems will be suggested in this thesis.

1.4 Outline

This thesis is organized as follows. Chapter 2 covers the description of the mixed-signal test problem, with discussions of present and future challenges. The chapter begins with a description of mixed-signal IC architectures. The existing industrial test methods, the various steps of testing and the main criteria for each step are subsequently discussed. After this, present IC market requirements and their implications for mixed-signal testing are analyzed. Following this, the research areas and the state of the art in mixed-signal testing research are explained. This overview of the existing situation is followed by an overview of future expectations in mixed-signal IC design and IC technology, from a testing point of view. Finally, the potential of a design-test link as a solution to some of the discussed mixed-signal test challenges is discussed.

Chapter 3 focuses on the design-test link. The conditions for linking design and test are discussed for the various levels of design. The usage of simulation to support the design-test link, and the associated possibilities and challenges, are pointed out. The state of the art in the various domains of the design-test link, such as virtual test, test plan generation and test program evaluation, is summarized. Based on these discussions, a general framework for a design-test link is defined. A systematic data flow between design and test environments is described.

Chapter 4 presents the implementation of a design and test framework based on the concepts defined in chapter 3. The framework is meant for the design and design evaluation of mixed-signal circuits. The implementation, including the design and test databases and the software tool developed for test plan generation, is described in detail. The description of the design and test of an example IC is also included in this chapter. The generated test plan, the test results and a discussion of experiences are presented.

In chapter 5 another part of the design-test link is discussed: the evaluation of the generated test plan. In this chapter, arguments are given for a defect-based test evaluation, and the challenges in performing this task with analog simulation methods are explored. The existing methods for making fault simulations computationally less complex and more feasible for practical applications are described. The advantages and disadvantages of each method are pointed out, and a complexity analysis is presented for an existing method.

Chapter 6 presents a new fault simulation methodology based on DC-bias grouping of nonlinear circuits. The simulation method is meant for transient simulation of nonlinear circuits, but some of the concepts described can also be applied to DC and small-signal AC simulation of these circuits. First, the methods of DC-bias grouping, one-step relaxation and linear equation update are described. The implementation of these methods in a prototype simulator is explained in detail. Two example analog circuits are simulated with the prototype simulator, and the results are compared with simulations using the same simulator core in a standard simulation configuration. The results show that a CPU time reduction of about 30% is possible by using the described method.

The main conclusions drawn from the work described in the previous chapters are presented in chapter 7, and recommendations for future research are made.

1.5 Bibliography

[Analog] Saber-IC Pro website, http://www.analogy.com/Test/Apptech/sabericpro.htm.

[Bate92] S. Bateman and W. Kao, “Simulation of an Integrated Design and Test Environment for Mixed-Signal Circuits,” in Proc. International Test Conference, 1992, pp. 405-411.

[Deby98] G. Debyser and G. Gielen, “Efficient analog circuit synthesis with simultaneous yield and robustness optimization,” in Proc. IEEE/ACM International Conference on Computer-Aided Design, 1998, pp. 308-311.

[Esta99] “United Nations Survey Projects Number of Net Users,” in eMarketer, URL: http://www.estats.com/estats/041999_un.html.

[Goel81] P. Goel, “An Implicit Enumeration Algorithm to Generate Tests for Combinational Circuits,” in IEEE Transactions on Computers, Vol. C-30, March 1981, pp. 215-222.

[Kras99] M. Krasnicki, R. Phelps, R.A. Rutenbar and L.R. Carley, “MAELSTROM: efficient simulation-based synthesis for custom analog cells,” in Proc. Design Automation Conference, 1999, pp. 945-950.

[Leen91] D.M.W. Leenaerts, “TOPICS: A new Hierarchical Design Tool Using an Expert System and Interval Analysis,” in Proc. Seventeenth European Solid State Circuits Conference, 1991, pp. 37-40.

[Maly97] W. Maly, “SIA Road Map and Design & Test,” Oral presentation at UC Berkeley, April 1997.

[Maly98] W. Maly, H.T. Heineken, J. Khare and P.K. Nag, “Design-Manufacturing Interface: Part I - Vision,” in Proc. DATE Conference, 1998, pp. 550-556.

[McCl00] B. McClean, “Communications Applications to Drive Future IC Market,” in EETimes.com, URL: http://www.eetimes.com, May 2000.

[Mieg98] M. Miegler, O. Kraus, H. Tauber, G. Krampl, S. Sattler and E. Sax, “Tester Independent Program Generation Using Generic Templates,” in Proc. International Mixed-Signal Test Workshop, 1998, pp. 260-263.

[Nagi99] N. Nagi, “System-on-a-Chip Mixed-Signal Test: Issues, Current Industry Practices and Future Trends,” in Proc. International Mixed-Signal Test Workshop, 1999, pp. 201-211.

[Ohr99] S. Ohr, “Synthesis Proves to be Holy Grail for Analog EDA,” in EETimes.com, URL: http://www.eetimes.com, June 1999.

[Roth66] J.P. Roth, “Diagnosis of Automata Failures: A Calculus and a Method,” in IBM Journal of Research and Development, Vol. 10, July 1966, pp. 278-291.

[SIA99] International Technology Roadmap for Semiconductors, 1999 Edition.

[Zori99] Y. Zorian, “Testing the Monster Chip,” in IEEE Spectrum, July 1999, pp. 54-60.

Chapter 2

The Mixed-Signal Test Problem: State of the Art and Boundary Conditions

2.1 Introduction

New developments in the IC industry often result in new challenges being introduced in the testing area. Technical changes in the way IC's are designed or manufactured usually have an impact on IC test. Advanced high-speed IC's require even faster testers as well as new test methods to detect speed-related failures. IC's with a higher level of integration require new DfT (Design for Testability) methods to access all embedded blocks. High-performance audio and video IC's require more accurate measurement methods to be able to test their tight specifications. Besides these and other technical developments, the ever-increasing market demands on IC production, such as higher quality, lower costs, and shorter development and production time, also have drastic consequences for how IC's should be tested. In this chapter, a general introduction is presented on the current state of the art and the future challenges for the testing of mixed-signal IC's. In this context, a link is made to IC design and manufacturing trends and the emerging test problems.


2.2 Mixed-Signal IC Architectures

The architecture of a mixed-signal IC can be defined as the top-level organization of analog and digital sub-circuits. The sub-circuits making up the IC architecture will be called analog/digital blocks. In test terminology, the terms macros and cores are also used to identify blocks. Mixed-signal IC architectures can be divided into two general categories [Aren97]:

• Register-controlled analog IC's, which consist for the most part of analog blocks (see figure 2.1(a)). The signal path is to a large extent analog, i.e. the processing of the input data is performed mostly by means of analog circuitry. The digital blocks usually make up part or all of the control circuitry. Typical examples are one-chip TV IC's, amplifiers for audio applications, etc. [Zwem98].

• Digitized analog IC's, having digital blocks in the signal path (see figure 2.1(b)) and analog and mixed-signal circuitry in peripheral blocks (analog-to-digital converter, ADC, and digital-to-analog converter, DAC) and the phase-locked loop (PLL).

The architecture of a mixed-signal IC has direct implications on how, and at what cost, the IC can be made testable. The testable design and testing of register-controlled analog IC's often creates access problems. Inserting DfT for each block to be tested can violate the area and performance limitations. Multiplexers are often used to make analog blocks accessible for testing, although too many multiplexers along the signal path of a high-performance IC, e.g. an audio application requiring a high signal-to-noise ratio (SNR), can degrade the performance. In this kind of IC, a trade-off has to be found between testability and the other limitations. The second category of IC's is less complicated to test, owing to the fact that the analog blocks are peripheral. The ADC and DAC blocks are made fully accessible by means of scan blocks at one side and primary inputs/outputs at the other. A PLL block, similarly, can be accessed by making only the output observable, since it is connected to a primary input where the locking frequency is applied. By using these access blocks, the analog/mixed-signal circuitry can be accessed and tested individually. The reader is referred to [Nagi99] for an up-to-date review of the industrial testing of these standard blocks.

Figure 2.1: Mixed-signal IC architectures: (a) register-controlled analog IC’s, and (b) digitized analog IC’s [Aren97]

Apart from the favorable accessibility of the analog blocks, digitized analog IC's also have the useful property that they contain only a few types of standard blocks (e.g. ADC, DAC, PLL). These blocks can be generalized in terms of functionality, so that general test methods per block can be developed and reused, with a different configuration, as often as an IC of similar architecture has to be tested. Blocks having a well-defined functionality, such that they can be fully modeled for testing purposes, will be called macros. Mixed-signal testing methods based on dividing the IC into macros make up the macro-based testing approach. The industrial testing of digitized analog IC's consists almost always of macro-based testing [Flah93],

[Agra98], [Nagi99], [Engi99a]. This is less true for register-controlled analog IC's, since their underlying blocks have diverse functionality, making the definition of macros difficult [Aren97], [Zwem98]. Because of the highly variable nature of the blocks and the possibility of blocks with non-standard functions, the application of systematic test techniques to register-controlled circuits is very difficult. The methods described in this thesis are in general applicable to digitized analog IC's. Extension to register-controlled blocks can be possible for production groups where the reuse of analog macros from a library (with small modifications) is common. The issues of design and test reuse will be discussed in more detail in chapters 3 and 4.

2.3 Current Design and Test Practice

Describing IC design and test procedures requires a certain generalization. In fact, the procedures in each development and production line depend on the type of product, the test facilities at hand and the customer requirements for the particular product. It is, however, possible to draw a general line which summarizes the common patterns of mixed-signal testing in industry. This is explained and illustrated in this section. A general overview of the design and test flow for mixed-signal IC's is given in figure 2.2. The test steps involved are the prototype and production testing steps.

2.3.1 Prototype Test

The prototype test consists of two testing steps: design debug and design evaluation (see figure 2.3). Design debug is the first and most informal test that an IC undergoes. The designer uses dedicated measurement instruments to see whether the crucial functions of the IC are working correctly. At the design evaluation step, the main aim is to apply a full functional test and to measure the specified parameters. During the prototype test, the primary aim is IC characterization. Instead of a pass/fail decision, the outcome of the prototype test is a set of performance figures for the prototype IC. The decision on the required modifications is made by the designer based on these figures.

[Figure 2.2: flow diagram. IC specifications → IC design and validation → prototype manufacture and testing → product manufacturing → production test → shipping; the flow is divided into a development phase and a test phase.]

Figure 2.2: Overview of design and test flow

Because prototype testing is performed on only a small number of IC's, the test time is not a primary limitation. However, the test choice and measurement accuracy are important, since a full evaluation is required and undetected design flaws can cause delays in the whole product development process. The hardware used in design evaluation often consists of VXI/GPIB-based computer-controlled measurement systems. These systems are not optimized for speed or reliability, but can make high-accuracy measurements. Apart from the measurements described above, the evaluation test also includes batch measurements made on the production test facility on a larger number of IC's, in order to obtain the statistical distributions of the measured parameters. If some of the obtained distributions are not within the desired limits, then either a redesign is required or precautions are taken in the IC processing steps to ensure a better distribution.

[Figure 2.3: flow diagram. IC design → prototype manufacturing → design debug (if not OK: design modifications and back) → design evaluation (if not OK: design modifications) → to IC manufacture.]

Figure 2.3: Prototype test flow

2.3.2 Production Test

Production test is the common name for the test steps applied to each IC at the mass production stage. These steps are applied before and after the IC is packaged and are called wafer test and final test, respectively (see figure 2.4). The wafer test usually consists of applying and measuring DC and low-frequency AC signals. These are mostly general functional and parametric tests, or tests in which e.g. the connections of power lines are checked. It is often not possible to apply high-frequency tests or tests which require very accurate timing measurements, because of the insufficient contact characteristics of the wafer probe pins. These kinds of tests are applied at the final test stage, once the IC is packaged. During the wafer test, the dies which are defective are marked. The dies that pass the wafer test are then packaged, and the final test is applied. At final test, the bonding connections are checked, the digital test patterns are run, and key analog specifications are measured. In some cases, tests that have already been performed during wafer test can be skipped in the final test in order to decrease the test time.

[Figure 2.4: flow diagram. Wafer test → (pass) packaging → final test → (pass) shipping; failing dies/IC's are rejected at each test step.]

Figure 2.4: Production test flow

There are differences in the requirements and aims of prototype test and production test. First of all, prototype test focuses primarily on design weaknesses rather than on manufacturing defects, while the main aim of production testing is to detect the defects resulting from the imperfections of the manufacturing process. Secondly, the test time is an important issue in production test, since it contributes to the costs of each IC. Another difference is the characterization approach in prototype testing, in contrast to the pass/fail decisions in production testing. And lastly, there is a large difference between the equipment and software used for prototype test and production test (except for the batch measurements during design evaluation). In high-volume testing, expensive industrial testers are used which are often faster and considered to be more reliable than VXI-based modular measurement systems. Tester reliability is more important in production testing, since a possible breakdown brings the production line to a stop, which is highly undesirable. These testers can perform measurements at the speeds required in mass-production testing, and are nowadays also equipped with DSP and test debugging software. The test software delivered with industrial testers contains standard DSP functions such as the FFT. The main differences between prototype and production testing are summarized in table 2.1.

Feature | Prototype Test | Production Test
--------|----------------|----------------
Focus on | Design correctness | Manufacturing defects
Test time | Not important | Important
Test approach | Characterization | Pass/fail
Equipment | VXI/GPIB based | Industrial ATE

Table 2.1: Differences between prototype and production tests

Besides the tester, another piece of hardware that plays an important role in the testing of an IC is the Device Interface Board (DIB), also called the loadboard. This is the board that forms the interface between the tester and the IC being tested. The DIB contains extra circuitry, such as decoupling and load elements, which is required to maintain the correct interaction of the tester channels with the IC under test. To get an idea about the efficiency of the mixed-signal IC design and test flow, it is necessary to look at the sequence and interrelations of the involved tasks in terms of the time they take. Before doing this, the test-related tasks and concepts will be defined here.

Definition 2.1 The description of the stimuli and measurements for a specific test method for one macro, written in a test description language that can drive the test equipment directly, is called a test function.

Definition 2.2 The complete sequence of test functions for an IC, written in a test description language that can drive the test equipment directly, is called a test program.

A test program is complete in the sense that it is directly executable on the test equipment without further additions. Thus, besides the test functions, the commands for the configuration of the tester, the inputs for the test control blocks of the IC, and the other steps necessary for a complete automatic IC testing process are also included in the test program.

Definition 2.3 The high level description of the test program is called a test plan.
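As a purely hypothetical sketch of the difference between these abstraction levels (none of the names or data structures below come from this thesis or from real ATE software), a test plan entry can be thought of as declarative data, while a test function is executable code that drives the equipment:

```python
test_plan_entry = {                     # test plan: high-level description
    "macro": "ADC12", "method": "SNR",
    "stimulus": {"shape": "sine", "freq_Hz": 23.7e3, "amplitude_V": 1.0},
    "limit_dB": 70.0,
}

class StubTester:                       # stand-in for real tester drivers
    def source(self, stimulus):         # would program the waveform generator
        self.stimulus = stimulus
    def measure_snr_dB(self):           # canned value, for illustration only
        return 74.0

def snr_test_function(tester, plan):    # test function: executable sequence
    tester.source(plan["stimulus"])
    snr = tester.measure_snr_dB()
    return ("PASS" if snr >= plan["limit_dB"] else "FAIL", snr)

print(snr_test_function(StubTester(), test_plan_entry))  # -> ('PASS', 74.0)
```

A test program, in this picture, is the complete executable sequence of such test functions plus the tester configuration and test control steps named in Definition 2.2.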

The terms test function, test program and test plan correspond to levels of abstraction for the representation of tests. These representations can exist for prototype testing as well as production testing. For analog and mixed-signal blocks, the test plan is either a document describing test parameters such as test signals, time length, measurement type, etc., or another higher-level representation of the test program. An example of a test plan for prototype testing will be given in chapter 4.

The term test development (see figure 2.2) will be used throughout this thesis for the total test preparation process, starting from the basic specifications and structure of the IC and ending in a test program. In figure 2.5, the timing of test development activities with respect to the IC development milestones is given. The design activities are not explicitly presented; only the milestones reached are given, in order to sketch the time relation between IC design and test. The milestones are based on a general top-down design approach. The interdependencies of the tasks can be seen in the figure. The digital test patterns are generated when the gate-level design is complete. The test generation for analog macros begins after, or shortly before, the analog macros are completely ready. Test program preparation corresponds to the making of a complete test program from the generated test signals and vectors. The design of interface circuitry is also part of the test program preparation process. When the prototype is ready, the test program and the DIB are debugged using the prototype. The prototype test preparation and prototype testing proceed in parallel with the production test activities. The generation of the prototype test has to be completed before the prototype is ready, although this is not always the case in practice. Design debug and evaluation is done (partially) in parallel with the debugging of production tests.

The additional time costs due to testing steps are shown with black arrows in figure 2.5, denoted as 'IC TTM bottleneck'. TTM (time-to-market), defined as the time between the specification of a new product and the moment it is brought to market, is a very important factor in terms of industrial competition.

[Figure 2.5: a timeline of test activities (prototype test planning, preparation, debugging and evaluation; analog and digital test generation; production test planning; test program preparation and debugging; testing) plotted against IC development milestones (IC specs, behavior-level design, analog cell design, gate-level digital design, analog cell layout, placement and routing, pre-production, mass production), with fully, partially and non-automated processes and the IC TTM bottlenecks marked.]

Figure 2.5: Timing of test activities with respect to IC development

It suffices to say at this point that especially the debugging of prototype and production test programs often forms a bottleneck for the speed at which mass production can begin.

2.4 Market Requirements

2.4.1 Quality

IC quality is defined as the number of shipped products that do not satisfy the specifications divided by the total number of shipped products [Aren97]. The yield of the manufacturing process, the effectiveness of the tests used, and the accuracy and reliability of the testing equipment are the factors that influence quality. In the IC industry, quality is measured in PPM (parts per million), i.e., the number of IC’s that fail to satisfy the specifications per million shipped IC’s. Another term in production that is related to quality is the defect level (also measured in PPM).

Definition 2.4 Defect level is defined as the ratio of the number of shipped IC’s that do not function as specified to the total number of shipped IC’s.

Throughout this thesis, defect level will be used to denote the quality of an IC. The aim of this section is to present the relation between the IC quality and the effectiveness of the corresponding test program. First some basic definitions related to IC manufacturing will be presented.

Definition 2.5 The number of fault-free dies produced divided by the total number of dies is called the process yield.

Y = Number of fault-free dies / Total number of manufactured dies

Definition 2.6 For a given production test step, the number of IC’s that pass the test divided by the number of IC’s that have been tested is called the test yield of that test step.

Ytest = Number of IC’s that passed the test / Total number of IC’s that have been tested

Definition 2.7 Any deviation in the electrical or geometrical properties of the manufactured IC from the values given by the IC layout beyond the expected process variation is called a defect.

Definition 2.8 The deviation of the electrical characteristics of the IC from the specified behavior, as caused by a defect, is called a fault.

In other words, a fault is the consequence of a defect; it is, however, also possible that a circuit with a defect exhibits no electrical fault at all.

Definition 2.9 Let $\mathcal{F} = \{f_1, f_2, \ldots, f_n\}$ be the set of all $n$ faults which are considered for an integrated circuit $\mathcal{C}$. Let $\mathcal{T}$ be a test program for $\mathcal{C}$ which is able to detect a set of faults $\mathcal{F}_d$, where $\mathcal{F}_d \subset \mathcal{F}$ and $|\mathcal{F}_d| = m$. The ratio $\frac{m}{n}$ expressed as a percentage is called the fault coverage of $\mathcal{T}$ for $\mathcal{C}$ based on the fault set $\mathcal{F}$.

The main aim of IC testing is to keep the defect level as low as possible. In the case of an ideal test, all possible defects are detected and the defect level becomes 0. A real test can deviate from this ideal case in two ways: either by failing fault-free IC’s (type 1 testing error), or by passing faulty IC’s (type 2 testing error) [Will90]. A large number of type 1 errors causes the IC costs to increase, because fault-free products are thrown away. A large number of type 2 errors, on the other hand, causes defective IC’s to be delivered, i.e., the defect level becomes high. Given an IC and the test program written for this IC, the probabilities of type 1 and type 2 errors depend on separate factors. The type 1 error probability is a function of tester quality [Will90] and test limits but not of the test method used, so this error type will not be discussed here any further. The probability of type 2 errors, on the other hand, depends on the fault coverage of the applied test program and the process yield. In other words, the main aim of writing an effective test program is minimizing the probability of type 2 errors. Assuming that there are no type 1 errors, and that all testing is done in one step, the probability of type 2 errors is equal to the defect level, since they both correspond to the probability of a defective IC passing the test. Although these assumptions do not hold in reality, the relationship between defect level, fault coverage and testing errors remains a strong one. This relation has been investigated by several researchers [Will81], [Will90]. The simplest and most popular model of this relation has been presented by Brown and Williams [Will81]. This model gives the relation between defect level, process yield and fault coverage as

$$DL = 1 - Y^{(1-T)} \qquad (2.1)$$

where $DL$ is the defect level, $Y$ is the process yield and $T$ is the fault coverage. This model is based on the assumptions that

• the set of all possible faults is finite and known beforehand,

• all faults are equally probable and independent,

• the probability of type 1 errors is zero.

Although the above assumptions are not realistic for an accurate estimation of the defect level, the relation presented in equation 2.1 gives a rough idea about the nature of the relation between product quality, process yield and the fault coverage of the test program. In figure 2.6, this relation is shown for a range of yield values. It can be seen in this figure that for a given process yield, a high fault coverage is required for a low defect level.

[Plot: defect level in PPM (0–1200) versus fault coverage (95–100%), with curves for process yields of 98%, 98.5%, 99%, 99.5% and 100%.]

Figure 2.6: The relation between process yield, defect level and fault coverage
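To make equation 2.1 concrete, the short sketch below evaluates the Brown–Williams model for the yield values of figure 2.6; the function name and printed format are our own illustration, not part of the model.

# A minimal sketch of the Brown-Williams defect level model (equation 2.1),
# reproducing the curves of figure 2.6.

def defect_level_ppm(process_yield, fault_coverage):
    """DL = 1 - Y**(1 - T), expressed in parts per million."""
    return (1.0 - process_yield ** (1.0 - fault_coverage)) * 1e6

for y in (0.98, 0.985, 0.99, 0.995, 1.0):
    for t in (0.95, 0.99, 1.0):
        print(f"Y = {y:6.1%}  T = {t:5.0%}  DL = {defect_level_ppm(y, t):7.1f} PPM")

For instance, a process yield of 98% combined with 95% fault coverage gives a defect level of roughly 1000 PPM, in line with the upper-left region of figure 2.6.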

The described relationship between process yield and fault coverage has not yet become a standard basis for the testing of analog and mixed-signal blocks. The methods that make the link between manufacturing process defects and testing are called ‘defect-oriented testing’ (DOT) methods. The work on DOT methods for analog blocks has so far been limited to academic institutions, and only recently have a few examples of industrial experiments been published [Xing98], [Beur99]. It remains a fact, however, that a realistically obtained fault coverage figure is the only accurate way of optimizing the test effectiveness and, in many cases, of decreasing the test time by preventing overtesting. With the increasing market pressure for lower defect levels and lower IC costs, the usage of analog tests with high fault coverage is expected to increase in the future.

2.4.2 Cost

IC costs can be divided into the development and production costs of the IC. Test development costs are included in the IC development costs, and the costs of testing an IC constitute a part of the production costs. In general, test development costs depend on the time spent on test generation, prototype testing and production test debugging. These issues are also related to TTM, which is even more important than cost in the development phase. The cost of production testing, on the other hand, has direct implications for the IC costs, because the production test is applied to each IC. Therefore, these costs will be discussed in this section. The costs of production testing depend on the following factors [Engi99a]:

• the tester used,

• the test yield (see definition 2.6),

• the test time of a fault-free die,

• the average test time of a faulty die,

• additional loading/handling times.

In general, for a fault-free die, the IC test cost is given by the cost of running the tester per unit time multiplied by the total test time per IC. For a given industrial test facility, consisting of ATE and prober or handler, and a given test yield, the manipulatable test cost parameters are the test time of a fault-free product and the average test time of a faulty product. For the average cost of testing a die, the test yield and the average test time of a faulty die must also be taken into account. The test time of a fault-free die is the total time the complete test program takes to measure and process measurement data. The fact that the test time of a faulty die also plays a role implies that the tests should be ordered according to their success in detecting defects. It has been suggested [Milo94] to use the fault coverage for making an optimal ordering of the tests to decrease this test time. In general, a large part of the test time of a mixed-signal IC is dominated by the tests for the analog parts. IC’s in which analog blocks making up 10% of the IC circuitry account for 60% or more of the test time are not uncommon [Nagi99], [Engi99a]. For this reason, the compaction of analog tests is an important issue in mixed-signal IC testing. However, it should also be noted that wafer loading and package handling times are important parameters for the test time as well. In situations where the handling times are comparable with the test time, decreasing the test length will obviously not be sufficient. In these cases alternative solutions such as multisite testing [Evan99] have to be applied, in order to use (a part of) the handling time to test another IC. The remaining factor, the test yield, depends on the process yield and the presence of test and measurement errors. Improving the process yield is beyond the scope of this thesis; however, design for manufacturability (DfM) techniques can be used with the results of fault extraction to decrease the probability of some faults. The probability of test errors can be decreased by acquiring a good fault list for the IC and assessing fault coverage figures for the tests, as explained in the previous section. It becomes clear from the discussion of the cost and quality issues of an IC that tests with high fault coverage can improve both the quality and the cost figures. On the other hand, the accuracy of the estimated fault coverage depends highly on how realistic the fault set at hand is. At present the fault models [Soma96a] for analog parts are not satisfactory for basing PPM estimates on, and extensive research is needed in this area in order to make use of fault-based techniques in quality and cost improvement. Another test-related factor that contributes indirectly to the cost of an IC is the area contribution of the built-in self-test (BIST) and design for testability (DfT) circuitry. In the semiconductor industry, opinions are divided on this subject. In fact, adding BIST and DfT in an efficient way can help decrease the test development and debugging costs, thereby compensating for at least a part of the additional area cost. Studies of the costs of these methods have been made, although these estimation methods consider digital IC’s, such as in [Chen99].
For mixed-signal IC’s a generalized framework for estimating such costs does not yet exist, apart from the ‘home-grown’ IC cost models of individual manufacturers [Engi99a]. Finally, the on-line debugging of tests has also often been pointed out as a cost factor [Bate92], because of the high costs of operating the expensive ATE. For this reason, research into ‘first-time-right’ tests and test simulation software employing tester models has gained momentum. Some ATE manufacturers supply software for off-line debugging, but the debugging of the device under test (DUT) with the DIB is still performed for a large part on-line. Standardization in terms of tester and interface board models has not yet been achieved for mixed-signal circuits.
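As an illustration of how the factors listed above combine, the sketch below computes an average per-die production test cost. The function, its parameter names and the example values are our own assumptions, not figures from this chapter.

# A minimal sketch of an average per-die production test cost estimate.
# All parameter values below are illustrative assumptions.

def avg_test_cost(tester_rate,   # tester operating cost per second
                  t_pass,        # test time of a fault-free die (s)
                  t_fail_avg,    # average test time of a faulty die (s)
                  t_handling,    # loading/handling time per die (s)
                  test_yield):   # fraction of dies passing the test
    # Faulty dies are usually rejected early, so they spend less tester time.
    avg_time = test_yield * t_pass + (1.0 - test_yield) * t_fail_avg
    return tester_rate * (avg_time + t_handling)

# Example: a 3 s full program, faulty dies rejected after 1 s on average.
print(avg_test_cost(tester_rate=0.05, t_pass=3.0, t_fail_avg=1.0,
                    t_handling=0.5, test_yield=0.9))

The handling term makes explicit why, when it is comparable to the test time, only multisite testing (rather than test compaction) reduces the effective cost.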

2.4.3 Time-to-Market

Experience shows that roughly one third of the total manpower in the design and test trajectory is spent on test development, programming and debugging [Brem98]. The effect of test activities on TTM is actually more important than this figure suggests, because most of this time is spent at the end of the design cycle, when relatively little time is spent on design and evaluation activities and test debugging dominates. As far as the mixed-signal blocks are concerned, the test issues most related to TTM are the development and debugging of test programs. The general tendency for mixed-signal parts is to start test development once the design, and sometimes even the prototype, is ready [Kao 92]. A large part of the debugging is normally done on the tester using the IC prototype [Engi99a]. Test programs with less complexity and methods that generate first-time-right test programs can help, to a degree, to decrease the contribution of test debugging to TTM. However, this is a very complicated problem for mixed-signal circuitry, since analog effects from the interface circuitry, such as noise and loading, grounding problems and similar effects, are very difficult to represent with a generalized model. Simple program bugs or errors related to the configuration of the ATE can, however, be prevented by (semi-)automating the test development process.

2.5 Mixed-Signal Test: Current Issues and Challenges

The pace of change in the field of analog testing has not been comparable to that of its digital counterpart. The main development until now has been in the area of test hardware and DSP methods, while work done on subjects like test generation, fault modeling and diagnosis has failed to converge to standard applications. Here, the state of the art in the main mixed-signal test issues will be summarized briefly.

2.5.1 Test and Measurement Environments

The most important developments in the testing of mixed-signal IC’s have been those in ATE hardware and software. The accuracy, noise figures and speed of ATE have improved very fast in the last decades, but these issues are expected to remain challenges for ATE designers in the future [SIA 99], [Groc97]. The usage of DSP-based analysis methods and coherent testing in mixed-signal ATE has been an important turning point for mixed-signal testing. Coherent testing relies on the full synchronization of the analog and digital parts of the test equipment and is the basis of dynamic measurements in modern ATE. Roughly stated, the coherency principle is based on choosing the number of samples of the measured periodic signal, N, with respect to the number of periods measured, M, such that M and N are two relatively prime integers [Maho87]. Coherent testing has many advantages. For one thing, the test speed can be governed by M rather than by the speed of the device under test (DUT). Also, the accuracy of dynamic measurements, which are very important for the testing of mixed-signal blocks, increases [Groc97]. In general, the DSP methods offer a level of flexibility and repeatability which would be impossible with conventional analog measurements. The analysis of captured data in the digital domain has also changed the capabilities of measurement equipment substantially, by making e.g. the concept of ‘virtual instruments’ possible. Virtual instruments are measurement options in the form of software-emulated instruments.
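As a small illustration of the coherency principle, the sketch below picks a coherent test frequency for a capture of N samples at sampling rate fs, with M and N relatively prime. The function name and the numerical values are illustrative assumptions.

# A minimal sketch of coherent test-frequency selection: with N samples at
# sampling rate fs and M signal periods in the capture window, coherency
# requires ft = (M / N) * fs with M and N relatively prime.

from math import gcd

def coherent_test_frequency(ft_wanted, fs, n_samples):
    """Return the coherent test frequency closest to ft_wanted, and M."""
    m = round(ft_wanted * n_samples / fs)
    while gcd(m, n_samples) != 1:      # enforce relative primality of M and N
        m += 1
    return m * fs / n_samples, m

ft, m = coherent_test_frequency(ft_wanted=1000.0, fs=48000.0, n_samples=4096)
print(f"M = {m} periods, coherent ft = {ft:.3f} Hz")

With N a power of two (convenient for the FFT), any odd M is relatively prime to N, so the adjustment loop terminates after at most one step.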

As to ATE software, the major advance of recent years is the introduction of the ‘virtual test’ concept. Virtual testing is the off-line verification and partial debugging of test programs in a simulation environment. Such an environment consists of tester models, a circuit simulator and possibly debugging functions, in order to verify the working of the developed test program before the prototype is produced. The aim is to make test debugging faster, and encouraging results have been reported in terms of TTM reduction [Einw98]. At present, most industrial ATE is delivered with some virtual test and debugging capability.

2.5.2 Design for Testability (DfT)

The work done on mixed-signal DfT methods during the last decade has focused on

• developing a mixed-signal extension to the IEEE boundary scan standard (IEEE 1149.1) to enable access to the inputs and outputs of analog blocks [Park93], [Sunt95], [Whet96],

• evaluations of the performance of the mixed-signal test bus [Lofs96],

• other DfT structures for enabling the testing of analog blocks.

The mixed-signal extended version of the IEEE 1149.1 standard is called the IEEE 1149.4 Mixed-Signal Test Bus and has recently been approved as an official IEEE standard. The mixed-signal test bus IEEE 1149.4 is slowly starting to be used by the semiconductor industry. A reason for this limited use is that the standard is actually meant for board-level test, so it is not suitable for most IC-level tests. Most of the crucial tests for analog and mixed-signal modules are dynamic and high-speed, and fall outside the application domain of IEEE 1149.4. It will take the industry a few years to become fully aware of the standard’s possibilities for board-level test, and IC-level usage for static tests may follow once IEEE 1149.4 becomes a common DfT method for mixed-signal IC’s. The rest of the mixed-signal DfT research focuses on developing configurable analog blocks (mostly operational amplifiers [Brat95], [Soma90], [Reno98]) for making access to embedded circuitry possible. Current-based testing has been shown to have a high fault coverage for the testing of ADC’s, and another method has been described [Lamm98] for testing the complete analog part of an IC by monitoring the proportions of the currents in various supply lines. DfT structures for high-frequency circuits are also becoming a focus point. Recently, a DfT structure employing undersampling has been suggested, and measurements at frequencies as high as 1.1 GHz have been reported [Maso99]. The acceptance and application of the research results on mixed-signal DfT in the IC industry is very cautious and slow. In most mixed-signal IC’s with a digitized analog architecture, the analog blocks are peripheral and access problems are solved to a great extent by using scan cells at the border of the digital circuitry, as done in [Flah93]. For register-controlled analog architectures, access remains a problem, since IC developers are often not very willing to insert DfT structures inside their analog circuitry because of the risk of degrading the performance, especially for high-frequency and high-performance circuitry. At present, the industrial practice for mixed-signal DfT is still to use analog switches/multiplexers and digital scan cells to reach analog/mixed-signal blocks.

2.5.3 Built-in Self-Test (BIST)

Built-in self-test is the common name for self-test circuitry on the IC. The greatest motivation behind analog and mixed-signal BIST is attaining a test speed and quality which is impossible, or very expensive, to achieve with external ATE. In general, the usage of BIST structures for analog on-chip test is aimed at improving the test costs and quality by [Sunt97]:

• decreasing the test time/costs,

• making it possible to test a mixed-signal IC on digital ATE,

• making more accurate and at-speed measurements possible without suffering from the limitations of the ATE available for the different test phases (chip, board, system, etc.).

As for test development and debugging time: test development time is typically included in the design time of BIST. The reuse of configurable BIST specialized in one type of analog macro makes it possible to minimize the design time for BIST. Test debugging is in this case not necessary, assuming that the BIST covers all the test requirements for the block. The research in mixed-signal BIST has a long history. A large number of BIST designs have been developed for generating standard test stimuli on-chip, for carrying out standard tests for often-used blocks such as ADC’s and DAC’s, and for test output processing techniques such as signature analysis. Among the existing mixed-signal BIST methods, important examples are HBIST [Ohle91], MADBIST [Tone96], HABIST [Fris97] and OBIST [Arab97]. HBIST is a global BIST scheme for the whole IC, and includes self-test modes for both digital and analog blocks as well as a mode for on-board interconnect testing. HABIST, OBIST and MADBIST are methods for testing analog blocks alone. HABIST uses histogram methods to produce a signature of the activity at selected nodes of the tested block. OBIST is based on a scheme which makes all analog blocks act like oscillators in the test mode, checks the oscillation frequency and decides whether the blocks are faulty. MADBIST is a specification-based on-chip testing scheme, including on-chip test stimulus generation and measurement circuitry for measuring typical specifications of standard analog blocks (e.g. the harmonic distortion of an ADC). In recent years, commercially available mixed-signal BIST has appeared on the market. The mixed-signal BIST solutions from LogicVision [EET 97] consist of BIST designs specializing in two of the most frequently tested analog macros, the PLL [pllBIS] and the ADC [adcBIS]. Using these test modules, it is possible to run a number of standard functional pass/fail tests [EET 97], where the operation limits are either hard-coded in the BIST circuitry or entered by the user. Another option is the BISTMaxx suite from Fluence [BISTMa], which is based on a set of tools including the OBIST and HABIST methods described in the previous paragraph. BISTMaxx includes hardware structures to be used on-chip and software tools to enable BIST configuration and the generation of supporting test algorithms.

2.5.4 Automatic Test Program Generation (ATPG)

Automatic generation of test programs for analog and mixed-signal circuits is traditionally macro-based and functional. Testing in this sense corresponds to the measurement of a set of specifications and the usage of predetermined specification tolerance boxes for making the pass/fail decision. This approach is called functional ATPG [Soma96b]. In recent years, however, the large scale of integration and the high demand for quality have been forcing another direction in analog test: generating and evaluating tests based on the circuit structure (read: netlist) and the set of potential faults, thus creating the structural ATPG approach. While functional test development methods are practical for developing tests for design evaluation, it is difficult to tell whether they are actually efficient. This is because no well-defined correlation exists between specifications and process defects, and thus it is not possible to say that a test that checks a specification also covers a large fraction of the possible defects that can occur in production. This is a disadvantage when test time is valuable and one wants to use it as efficiently as possible in order to decrease the defect level. These arguments make structural test generation attractive, especially for production testing. However, analog structural ATPG is still in a very primitive phase. It is practically impossible to develop a method that can generate tests based on an arbitrary netlist. The reasons for this have been discussed in section 1.2.2.1. Because of this difficulty, most of the structural methods constrain themselves to (combinations of) a few types of inputs (mostly sine, ramp, step, etc.) and standard AC or DC measurements [BenH96], and use various methods and criteria to select an effective set of these inputs. With respect to the selection method used, structural ATPG methods can be categorized as:

• Sensitivity analysis based methods: Sensitivity analysis makes use of a sensitivity matrix, whose elements each relate the derivative of an output to a circuit parameter, i.e.,

$$S = \frac{\partial v}{\partial p} \qquad (2.2)$$

where $S$ is the sensitivity matrix, $\partial p$ is the vector of perturbations in the parameter values $p_i$, and $\partial v$ is the vector of corresponding changes in the output voltage values (a small numerical sketch follows this list). The sensitivity matrix can be used as an accurate relation between the circuit operation and the parameter values; for large circuits, however, it can lead to long simulation times. The sensitivity matrix approach has been used in many works; a good example is [BenH96], where sensitivity analysis has been used to select test stimulus values for a predefined set of input types (voltage level for DC, frequency for AC, etc.). This work takes advantage of sensitivity analysis to generate tests also for parametric faults, by calculating minimum and maximum values for element deviations. The test generation is done by taking each fault in the fault dictionary into account and calculating the best AC and DC test inputs for detecting the fault.

• Linearity based methods: These methods are based on linear system theory and are applicable to systems which operate in their linear regions in both the fault-free and the faulty state. In [Bali95] and [Pan 97] examples of such ATPG methods are given. The main disadvantage is that it cannot be assumed in general that the faulty circuit will still be operating in its linear region. It cannot even be assumed that the number of states will remain the same for the faulty circuit. The only application area of these methods is possibly the detection of soft faults in linear circuits.

• Fault simulation based methods: These methods rely on fault simulations in order to choose and configure test stimuli out of a number of predefined signals. In [Kaal96] an example of such an ATPG tool is given. The main disadvantage of this type of tool is the large simulation time needed to simulate each fault. The development of faster fault simulation methods (see chapters 5 and 6) can make such ATPG more feasible.
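The sketch below builds a sensitivity matrix of the kind in equation 2.2 by forward finite differences. The two-output circuit response function is a stand-in assumption: in practice each evaluation would be a circuit simulation.

# A minimal sketch of building the sensitivity matrix S = dv/dp of
# equation 2.2 by forward finite differences. response() is a stand-in
# assumption for a circuit simulation.

import numpy as np

def response(p):
    """Stand-in for a simulated output vector v given parameters p."""
    r1, r2 = p                           # e.g. a resistive divider
    return np.array([r2 / (r1 + r2), r1 * r2 / (r1 + r2)])

def sensitivity_matrix(p, rel_step=1e-6):
    p = np.asarray(p, dtype=float)
    v0 = response(p)
    S = np.empty((v0.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += rel_step * p[j]         # perturb one parameter at a time
        S[:, j] = (response(dp) - v0) / (rel_step * p[j])
    return S

print(sensitivity_matrix([400.0, 2000.0]))

Each column requires one extra simulation, which is why the text above notes that the approach becomes expensive for circuits with many parameters.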

In general, mixed-signal ATPG methods are not used in the industry yet. The use of the circuit structure for the evaluation of functional tests seems more convenient for most IC’s, although there will certainly be a place for ATPG in augmenting the standard tests to increase the fault coverage or reduce the test time.

2.5.5 Test Program Evaluation

In general, a test program can be evaluated in terms of test cost and test effectiveness. The relationship between fault coverage as a test effectiveness measure and IC quality has been discussed in section 2.4.1. Since there are infinitely many possible defects that can cause a fault in the IC, the definition of fault coverage given in section 2.4.1 is not complete without the accompanying fault list and the fault model for each fault in the list. Because it will never be possible to model and list all the faults in a circuit, the most common defects for a given process have to be selected and modeled at circuit level or a higher abstraction level. The modeling of the effects of common failure modes at circuit or higher level is called fault modeling. The test effectiveness analysis done by injecting fault models into the circuit and simulating the test stimuli and pass/fail checks in order to obtain the fault coverage is called fault simulation. The existing fault models can be categorized as structural, parametric and behavioral fault models [Soma96a]. Structural fault models are applied at component level and change the topology of the circuit, for example an extra resistance between two circuit lines. The most commonly used structural fault models for analog circuits are opens and shorts/bridges. However, the simulation of faults at this level is very CPU-intensive [Xing98], [Soma96a]. Research is focusing on behavioral fault models at analog/mixed-signal macro level, in which the defective behavior is modeled in terms of deviations of the macro behavior from the specifications. The main problem with this approach is the link with physical reality: the fault model gives an idea about which failures can be expected for a general macro, but the specific layout and manufacturing process information are not included in the analysis. Parametric fault models correspond to those defects whose effects can be represented by a change in the value of one or more circuit parameters without changing the circuit topology. The simulation of these faults is done using sensitivity-based methods and is less CPU-intensive than the simulation of structural faults [Soma96a]. At present, analog fault simulation is not used in the IC industry except in a few cases, such as described in [Xing98]. However, if the cost and quality of mixed-signal IC’s are to keep up with market expectations, the usage of defect-oriented methods will be inevitable in the future. For this reason, new, faster analog/mixed-signal fault simulators and/or other defect-oriented test approaches have to be developed. More on test evaluation and analog fault simulation will be discussed in chapters 5 and 6.
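The skeleton below sketches this fault simulation flow — inject each fault model, apply the test's pass/fail check, count detections — yielding the fault coverage of definition 2.9. All names are illustrative assumptions; in practice simulate() would invoke a circuit-level fault simulator.

# A minimal sketch of fault simulation for test evaluation. simulate() and
# passes_test() are stand-in assumptions for a fault simulator and the
# pass/fail check of the test under evaluation.

def fault_coverage(fault_list, simulate, passes_test):
    detected = 0
    for fault in fault_list:
        measurement = simulate(fault)          # circuit with fault injected
        if not passes_test(measurement):       # the test flags the faulty IC
            detected += 1
    return 100.0 * detected / len(fault_list)  # percentage, m/n of def. 2.9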

2.6 Trends in Mixed-Signal IC Design

The integration level of mixed-signal IC’s is showing a steady increase. Every product generation sees more functionality on the same IC, in recent years also including memory and RF circuitry. More integration means more deeply embedded blocks and more access problems, since the number of pins does not increase at the same speed as the number of components on the IC. Higher density and more layers also cause the internal noise and the crosstalk between the IC lines to increase. It is suggested in [Atwo99] that the integration of high-speed digital and analog blocks on the same IC also creates additional embedded circuitry, which has to be built around the analog blocks in order to protect them from the noisy effects of the digital circuitry. A number of new design developments make the new generations of IC’s more and more prone to internal and measurement noise. Low-power IC’s operating at supply voltages of 2.5–3 V are common at present. The trend of dropping supply voltage levels is expected to continue with the future manufacturing technologies [SIA 99]. In IC’s with a low supply voltage, the electromagnetic coupling noise caused by the operation of the digital parts can negatively influence the testing of the mixed-signal parts [Nagi99]. On the other hand, the continuously increasing demands on the performance of mixed-signal IC’s require very low-noise measurements. Parameters such as the SNR (signal-to-noise ratio) of high-resolution data converters have to be obtained using equipment with a very low noise floor. Similarly, the low-signal-power measurements for testing RF circuits have to be performed using low-noise equipment. These developments push both the test hardware performance, such as lower noise floors [SIA 99], and the development of new probing and measurement methods. Another important trend comprising a test challenge is the increase in IC clock speeds. Accurate measurements on high-resolution and high-speed ADC and DAC blocks require high ATE sampling rates and timing accuracy. For example, testing a PLL in a high-speed IC involves the measurement of the PLL jitter at locking frequencies. The testing of general-purpose high-speed transceivers such as FireWire (IEEE 1394) is done at the high data rates and low transmit/receive levels for which these buses are designed [Nagi99]. This kind of high-performance circuitry requires testers with very high speed, low noise and low jitter figures. The design of tester interfaces and probes also becomes a challenge. These factors make on-chip testing a favorable alternative for the new generations of mixed-signal IC’s. The emergence of IP (intellectual property) cores and core reuse in recent years is also transforming both design and test methodology for IC’s. The testing of IP-based designs creates a demand for new system-level test development and test reuse methods. At present, the most commonly used IP cores are memories [Zori99]. Digital cores (e.g. [ARM]) are also used in increasing numbers, and analog/mixed-signal cores [Ohr 99] have also become more and more available in recent years. On the digital front, the presence of standards such as the VHDL behavior description language and the STIL test description language [Tayl98], together with scan-based design, makes it easier to develop correct tests in a systematic fashion.
Digital cores are delivered with a set of test vectors, and peripheral cells around the core, such as wrappers, are used to translate the core-level tests to system-level tests and include them in the test program [Harr99]. Standards for access and test formats are being developed by the IEEE P1500 Embedded Test Standard Group [P1500]. For mixed-signal and analog cores, however, systematic design and test reuse and standardization are still in an infant stage [Fang00]. The lack of standard behavior and test description languages, as well as of standard DfT and BIST structures, makes it practically impossible to define a standard for core test. Another obstacle is that analog circuitry has much more interaction with the surrounding blocks (i.e. loading, frequency behavior, etc.), with the result that analog cores are usually inserted after some changes and tuning [Bass95]; they can therefore often not be delivered (as is done with digital cores) with a fixed set of ready-to-apply tests. The systematic reuse of analog tests will be essential in the future, when mixed-signal cores become more common and the time limitations for test development become tighter. Research into mixed-signal DfT and analog test description languages, as well as mixed-signal extensions of standards such as STIL, is required to meet these challenges.

2.7 Trends in IC Technology

The dimensions of IC technologies are becoming smaller and smaller, which in turn affects manufacturing and testing. IC’s having transistors with a gate length of 0.18 µm are already being produced, and this is expected to decrease to 0.12 µm in 2001. Technologies with smaller feature sizes are already in the research phase [Bell99]. Smaller dimensions bring along smaller distances between signal lines, thus requiring more focus on the testing of noise and crosstalk effects. The smaller feature sizes also mean that IC’s become more defect-sensitive. A defect of a given size is more likely to cause a fault in an IC of a smaller technology. Soft faults which would be negligible in technologies with larger feature sizes can degrade the performance of the IC. It is also generally true that process control is becoming more difficult with the increasing miniaturization, so defects causing soft faults become all the more likely in new technologies. ‘Invisible’ defects, which cannot be identified by optical methods, can occur in submicron IC’s, so fast and correct methods for electrical fault diagnosis will be needed [Maly96]. To sum up, new IC technologies require:

• well-defined links between test and process defects,

• fast and accurate fault diagnosis methods,

• fault modeling of crosstalk effects,

• generation of tests for the efficient detection of soft faults.

2.8 Design-Test Link

In the previous sections, a general picture of the current situation and the challenges for mixed-signal testing has been given. The points below summarize a few of the main mixed-signal challenges discussed:

• the development and debugging time must be decreased for both prototype and production test,

• the link between processing defects and test has to be well-defined in terms of accurate but simple fault models,

• fast fault simulation algorithms are needed in the analog and mixed-signal domain.

With respect to the first point, the solution in digital design and test has already been found by making design and test development one single integrated flow. With the development of high-level description languages, automatic synthesis methods, simple fault models and ‘complete’ ATPG methods, it has been possible to combine the initial specifications, design, test and manufacturing at various levels of abstraction, leading to a top-down design methodology with automatic DfT insertion and test generation steps included. An important question is: can we do something similar in the analog domain? The concepts that are applied to digital IC’s can obviously not be readily applied to analog circuits. However, there are a few methods that make a structured flow more feasible. The most important of these is reuse in design and test. The analog design and test teams reuse their work routinely; however, this still takes place in the existing ‘sequential’ approach to design and test, meaning that test is seen as a ‘routine end step’ after design and production. More structured reuse methodologies can increase the efficiency of design and test by increasing the amount of work done in parallel. This parallelism can be achieved by formalizing the way design and test data are stored and exchanged. Employing a systematic link between design and test at various levels of abstraction, in order to improve the efficiency of design and test activities, will be called design-test link. This approach is aimed at solving two main issues in mixed-signal design and test:

• not enough parallelism is employed between design and test development,

• the data transfer between design and test is not well-defined and thus error-prone, making test bugs, under- and overtesting likely.

A framework developed for the integration of prototype design and test development for mixed-signal IC’s will be described in chapters 3 and 4. 42 CHAPTER 2. THE MIXED-SIGNAL TEST PROBLEM

2.9 Conclusions

A general view of the state of the art in mixed-signal testing, the current mixed-signal test paradigm and future challenges have been discussed in this chapter. The future challenges discussed can be separated into two general groups. The first group consists of challenges dictated by the technical developments in IC design and technology, such as the need for higher test speed, higher accuracy, new BIST and DfT structures, new fault models, etc. Most of these issues require solutions in terms of IC test structures or tester hardware, and will not be central in this thesis. The second group consists of market-dictated challenges, namely the demand for lower cost, higher quality and shorter TTM. It has been explained in this chapter that the pressure for shorter TTM increases the demand for development and production environments in which test development and debugging cost less time. The requirements on quality and cost, on the other hand, put the stress on better evaluation and optimization of test programs. The investigation of possible solutions for the second group of challenges will be the subject of this thesis. In section 2.8 the potential of design-test link solutions for minimizing test development and debugging time has been discussed, and the aims for this approach have been set as increasing the design-test parallelism and making the design and test data exchange more systematic. Methods for combining the design and test flows to solve the TTM problems will be discussed in chapters 3 and 4, with a solution for prototype testing suggested in chapter 4. The quality challenges will be addressed in chapters 5 and 6, where the problem of simulating faulty circuits at feasible computational costs will be attacked.

2.10 Bibliography

[adcBIS] adcBIST website, http://www.logicvision.com/solution/adcbist.htm.

[Agra98] V.D. Agrawal, “Design of Mixed-Signal Systems for Testability,” in Integration: The VLSI Journal, Vol. 26, pp. 141-150, 1998.

[Arab97] K. Arabi and B. Kaminska, “Oscillation Built-in Self-Test (OBIST) Scheme for Functional and Structural Testing of Analog and Mixed-Signal Integrated Circuits,” in Proceedings of International Test Conference, 1997, pp. 786-795.

[Aren97] R.G.J. Arendsen, Defect Based Test Selection for Large Mixed- Signal Circuits, Ph.D. Thesis, University of Twente, April 1997.

[ARM] ARM website, http://www.cambridge.arm.com/.

[Atwo99] E.R. Atwood, “Analog Fault Simulation: Need It? No. It is Already Done,” in Proceedings of International Test Conference, 1999, pp. 649.

[Bali95] A. Balivada, J. Chen, J.A. Abraham, “Efficient Testing of Linear Analog Circuits,” in Proc. International Mixed Signal Test Workshop, 1995, pp. 66-71.

[Bass95] G. Bassak, “Focus Report: Analog and Mixed-Signal Cells,” in Integrated System Design Magazine, Focus Report August 1995, URL: http://www.eedesign.com/EEdesign/FocusReport9508.html.

[Bate92] S. Bateman and W. Kao, “Simulation of an Integrated Design and Test Environment for Mixed-Signal Circuits,” in Proc. International Test Conference, 1992, pp. 405-411.

[Bell99] “Revolutionary Transistor Design Turns the Silicon World on End,” Bell Labs Press Release, Nov. 15, 1999.

[BenH96] N. Ben Hamida, K. Saab, D. Marche, B. Kaminska and G. Quesnel, “LIMSoft: Automated Tool for Design and Test Integration of Analog Circuits,” in Proc. International Test Conference, 1996, pp. 571-580.

[Beur99] R.H. Beurze, Y. Xing, R. van Kleef, R.J.W.T. Tangelder and N. Engin, “Practical Implementation of Defect-Oriented Testing for a Mixed-Signal Class-D Amplifier,” in Proc. European Test Workshop, Constance, Germany, May 1999, pp. 28-33.

[BISTMa] BISTMaxx website, http://www.fluence.com/bistmaxx/BISTMaxx ProductDescription.pdf.

[Brat95] A.H. Bratt, A.M.D. Richardson, R.J.A. Harvey and A.P. Dorey, “Design-for-Test Structure for Optimizing Analogue and Mixed-Signal IC Test,” in Proc. European Design and Test Conference, 1995, pp. 24-33.

[Brem98] Harry Bremer, private communication, Philips Semiconductors, Nijmegen, The Netherlands. August 1998.

[Chen99] T. Chen, V.-K. Kim, and M. Tegethoff, “IC Manufacturing Test Cost Estimation at Early Stages of the Design Cycle,” in Microelectronics Journal, Vol. 30, 1999, pp. 733-738.

[EET 97] “LogicVision product combines intellectual property, design services – ITC sees first commercial mixed-signal BIST,” in EE Times, November 03, 1997, Issue: 979, Section: News.

[Einw98] K. Einwich, P. Schwarz, P. Trappe, T. Chambers, G. Krampl, H. Zojer and S. Sattler, “Virtual Test of Complex Mixed-Signal Telecommunication Circuits Reusing System-level Models,” in Proc. International Mixed-Signal Test Workshop, 1997, pp. 237-242.

[Engi99a] N. Engin, Improving the Test Time and Product Quality with DOTSS, Philips report no: RNB-C/47/99I-023, January 1999.

[Evan99] A.C. Evans, “Applications of Semiconductor Test Economics and Multisite Testing to Lower Cost of Test,” in Proc. International Test Conference, 1999, pp. 113-123.

[Fang00] L. Fang, M. Stancic and H. Kerkhoff, “Mixed-Signal Core-Based Testing,” in Proc. European Test Workshop, 2000, pp. 279-280.

[Flah93] E. Flaherty, A. Allan and J. Morris, “Design for Testability of a Modular, Mixed Signal Family of VLSI Devices,” in Proc. International Test Conference, 1993, pp. 797-804.

[Fris97] A. Frisch and T. Almy, “HABIST: Histogram-Based Analog Built-in Self-Test,” in Proc. International Test Conference, 1997, pp. 760-767.

[Groc97] A. Grochowski, D. Bhattacharya, T.R. Viswanathan and K. Laker, “Integrated Circuit Testing for Quality Assurance in Manufacturing: History, Current Status, and Future Trends,” in IEEE Transactions on Circuits and Systems-II: Analog and Digital Signal Processing, Vol. 44, No. 8, August 1997, pp. 610-633.

[Harr99] P. Harrod, “Testing Reusable IP - A Case Study,” in Proc. International Test Conference, September 1999, Atlantic City, NJ, USA, pp. 493-498.

[Kaal96] V. Kaal and H. Kerkhoff, “On the Optimization and Optimal Selection of Tests for Analog/Mixed-Signal Macros,” in Proc. International Mixed-Signal Test Workshop, 1996, pp. 31-34.

[Kao 92] W. Kao, J. Xia and T. Boydston, “Automatic Test Program Generation for Mixed Signal ICs via Design to Test Link,” in Proc. International Test Conference, 1992, pp. 860-865.

[Lamm98] J.P.M. van Lammeren, “ICCQ: A Test Method for Analogue VLSI Based on Current Monitoring,” in Proc. International Mixed-Signal Test Workshop, 1998, pp. 169-173.

[Lind98] W.M. Lindermeir, T.J. Vogels and H.E. Graeb, “Analog Test Design with IDD Measurements for the Detection of Parametric and Catastrophic Faults,” in DATE Conference, 1998, pp. 822-827.

[Lofs96] K. Lofstrom, “Early Capture for Boundary Scan Timing Measurements,” in Proc. International Test Conference, 1996, pp. 417-421.

[Maho87] M. Mahoney, DSP-Based Testing of Analog and Mixed-Signal Circuits, IEEE Computer Society Press, 1987.

[Maly96] W. Maly, “Future of Testing: Reintegration of Design, Testing and Manufacturing,” keynote in European Design and Test Conference, 1996.

[Maso99] R. Mason and S. Ma, “Mixed Signal DfT at GHz Frequencies,” in Journal of Electronic Testing: Theory and Applications, Vol. 15, 1999, pp. 31-39.

[Milo94] L. Milor and A.L. Sangiovanni-Vincentelli, “Minimizing Production Test Time to Detect Faults in Analog Circuits,” in IEEE Transactions on CAD, Vol. 13, No. 6, Jun. 1994, pp. 796-813.

[Nagi99] N. Nagi, “System-on-a-Chip Mixed-Signal Test: Issues, Current Industry Practices and Future Trends,” in Proc. International Mixed-Signal Test Workshop, 1999, pp. 201-211.

[Ohle91] M.J. Ohletz, “Hybrid Built-in Self-Test (HBIST) for Mixed Analogue/Digital Integrated Circuits,” in Proc. European Test Conference, 1991, pp. 307-316.

[Ohr 99] S. Ohr, “Analog intellectual property slow to start trading,” in EETimes.com, URL: http://www.eetimes.com, March 1999.

[P1500] IEEE P1500 website, http://grouper.ieee.org/groups/1500/

[Pan 97] C.-Y. Pan and K.-T. Cheng, “Test Generation for Linear, Time-Invariant Analog Circuits,” in Proc. International Mixed-Signal Test Workshop, 1997, pp. 93-100.

[Park93] K.P. Parker, J.E. McDermid and S. Oresjo, “Structure and Metrology for an Analog Testability Bus,” in Proc. International Test Conference, 1993, pp. 309-317.

[pllBIS] pllBIST website, http://www.logicvision.com/solution/pllbist.htm.

[Reno98] M. Renovell, F. Azais and Y. Bertrand, “Optimized Implementations of the Multi-Configuration DFT Technique for Analog Circuits,” in DATE Conference, 1998, pp. 815-821.

[SIA 99] International Technology Roadmap for Semiconductors, 1999 Edition.

[Silv97] J. da Silva, A. Leao, J. Matos, J. Alves, “Implementation of Mixed Current/Voltage Testing Using the IEEE p1149.4 Infrastructure,” in Proc. International Test Conference, 1997, pp. 509-515.

[Soma90] M. Soma, “A Design-for-Test Methodology for Active Analog Filters,” in Proc. International Test Conference, 1990, pp. 183-192.

[Soma96a] M. Soma, “Challenges in Analog and Mixed-Signal Fault Models,” in IEEE Circuits and Devices Magazine, January 1996, pp. 16-19.

[Soma96b] M. Soma, “Automatic Test Generation Algorithms for Analogue Circuits,” in IEE Proc.-Circuits Devices Syst., Vol. 143, No. 6, December 1996, pp. 366-373.

[Sunt95] S. Sunter, “The P1149.4 Mixed Signal Test Bus: Costs and Benefits,” in Proc. International Test Conference, 1995, pp. 444-450.

[Sunt97] S. Sunter, “Mixed-Signal BIST: Does Industry Need It?,” tutorial in International Mixed-Signal Test Workshop, 1997.

[Tayl98] T. Taylor, “Standard Test Interface Language (STIL), Extending the Standard,” in Proceedings of International Test Conference, 1998, pp. 962-971.

[Tone96] M.F. Toner and G.W. Roberts, “A Frequency Response, Harmonic Distortion, and Intermodulation Distortion Test for BIST of a Sigma-Delta ADC,” in IEEE Transactions on Circuits and Systems-II: Analog and Digital Signal Processing, Vol. 43, No. 8, August 1996, pp. 608-613.

[Whet96] L. Whetsel, “Proposal to Simplify Development of a Mixed Signal Test Standard,” in Proc. European Design and Test Conference, 1996, pp. 400.

[Will81] T.W. Williams and N.C. Brown, “Defect Level as a Function of Fault Coverage,” in IEEE Transactions on Computers, Vol. C-30, No. 12, December 1981, pp. 987-988.

[Will90] R.H. Williams and C. F. Hawkins, “Errors in Testing,” in Proc. International Test Conference, 1990, pp. 1018-1027.

[Xing98] Y. Xing, “Defect-Oriented Testing of Mixed-Signal IC’s: Some Industrial Experiences,” in Proc. International Test Conference, Oct. 1998, pp. 678-687.

[Zori99] Y. Zorian, “Testing the Monster Chip,” in IEEE Spectrum, July 1999, pp. 54-60.

[Zwem98] T. Zwemstra, private communication, Philips Semiconductors, Nijmegen, The Netherlands, January 1998.

Chapter 3

A General Framework for Mixed-Signal Test Generation and Testing

3.1 Introduction

The existing design and test approach for mixed-signal circuits has already been introduced in chapter 2. As has been pointed out, evidence shows that decreasing the test development and debugging time is necessary for both prototype and production testing of mixed-signal IC’s. In this chapter, a new approach for the design-test link of mixed-signal IC’s will be introduced. First, the present data flow between design and test development processes will be discussed and drawbacks will be pointed out. Then the possible links at each level of design and test will be discussed. Finally, the state of the art in design-test link will be summarized and our own framework for a design-test link will be presented. Throughout this chapter, the term analog will be used for all analog and mixed-signal IC blocks. The reason is that the requirements and approaches taken for linking design and test flow for mixed-signal blocks are similar to those for purely analog blocks. The specifications on which the testing strategy is based are also expressed in similar terms for both purely analog and mixed-signal blocks.


3.2 Design and Test Flow: The Current Status

The argument for linking design and test is the inefficiency of keeping them apart. A look at the present situation will make clear at which points this inefficiency arises and how it can be remedied. In figure 3.1, the flowchart of the present design and test development flow can be seen. For simplicity, the digital part of the flow has been left out of the picture for the moment. In the digital case the development of tests proceeds in parallel with the design process, and test generation is automated, unlike in the analog case. As can be seen in figure 3.1, design and test development are performed sequentially in a traditional design and test flow. What is important here is the fact that the test debugging and IC debug and evaluation times determine when the product can be shipped. Moreover, the development of the test program is done manually. This not only costs a large amount of manpower, but is also error-prone. Furthermore, the verification of the test program is based only on the limitations of the test equipment; no verification in terms of IC quality aspects is present. The inefficiency points described in the previous paragraphs will be a starting point for the discussion of design-test integration. The general aim is to start test generation as early as possible, and to make direct use of the data present in the design environment. In this chapter, a general treatment will be given of how test generation can proceed in a more design-integrated way, leaving the distinction between production and prototype test out of the discussion. Once the framework has been described, an example implemented for prototype test will be presented in chapter 4. It has to be pointed out that the discussions presented in this chapter are based on a specification-oriented testing approach at macro level for the analog part of the IC. At present, this is inevitable for analog blocks, since there is no well-defined link between the circuit topology of a macro and the macro behavior, as there is in digital design. In order to make the link between the conventional top-down design process and the test development process, the analog circuitry must first be treated at the block behavioral level; more data from the circuit operation can be taken into account during the test development process as the design of the blocks proceeds.

[Flowchart: IC specifications; IC design with design verification; DfT insertion; test development and interface board design; prototype manufacture; test debugging (offline and online); IC debug and evaluation; mass production.]

Figure 3.1: Traditional design and test development flow

3.3 High-Level Considerations

Because a design-test link aims to start test development from the early design steps, it is necessary to look at the design and test considerations at the IC level, which corresponds to the beginning of the design process. The design of an IC starts with a set of specifications. The high-level design process is based on determining the top-level IC architecture with the analog and digital blocks, and defining the behavior of each block quantitatively, such that the IC-level specifications can be verified by means of behavioral simulations. At the same time, the system-level testability has to be defined: the qualitative descriptions of the tests (i.e. which parameter has to be measured, which blocks should be tested together and which separately, etc.) are also determined. Only when the designs of standard blocks such as ADC’s are being reused can the test functions for these blocks also be ready at this stage, owing to test reuse. As explained in the previous chapter, macro-based testing is a standard approach for most analog blocks. The main reason for this is that the traditional testing approach for these blocks is functional. An important high-level consideration for macro-based testing is accessibility. Whether it is feasible to make each macro accessible, and how the testing procedure can be adapted when this is not possible, are decisions which have to be made at this stage. Since any access blocks introduced can influence the operation of the IC, these decisions must be made on the basis of both design and test requirements. When the decisions on accessibility have been made, the specifications of the DfT and the access modes for each macro have to be defined. An important consideration in the DfT and testing of analog macros is the loading between two blocks. Most analog blocks are designed such that the loading between connected blocks is optimized. However, in situations where this is not the case, inserting a DfT block, e.g. a multiplexer, between these blocks can result in erroneous interpretations of the results of functional tests. An example of this is shown in figure 3.2. In figure 3.2(a), the interface between two analog macros is seen. For the sake of simplicity, only the real (resistive) part of the input and output impedances of the analog macros is considered in this example. Macro1 has output resistance $R_{o1}$ and Macro2 input resistance $R_{i2}$. Assuming DC operation, this results in loading between the two blocks, so that the effective voltage input at the second macro becomes

$$V_{i2} = \frac{R_{i2}}{R_{i2} + R_{o1}} \cdot V_{o1} \qquad (3.1)$$

When DfT in the form of a multiplexer is added, the second situation arises. As shown in figure 3.2(b), the effective output resistance of the first macro has increased by the ‘ON’ resistance $R_{mux}$ of the multiplexer, thus clearly causing more loading during normal operation. The third case displays how this situation changes during testing: this time the first macro is an arbitrary waveform generator (AWG).

[Figure 3.2: circuit schematics of the Macro1–Macro2 interface: (a) direct connection; (b) with a DfT multiplexer ($R_{mux}$, load capacitances $C_L$ and $C_{Lm}$) in operation mode; (c) with the multiplexer driven by an AWG in test mode.]

Figure 3.2: Loading, in case of: (a) no DfT; (b) DfT, in operation mode; (c) DfT, in test mode.

In this test configuration, the voltage at the input of the second macro becomes

V_{i2} = \frac{R_{i2}}{R_{i2} + R_{oAWG}} \cdot V_{oAWG} \qquad (3.2)

To give a numerical example, suppose that there is a strong coupling between the two macros, with Ro1 = 400 Ω and Ri2 = 2000 Ω. When the first macro produces a 3 V DC voltage, the input at the second macro will then be 2.5 V. The multiplexer resistance will normally be quite low, so assume it is 100 Ω. In that case, situation (b) hardly changes the operation of the two blocks (Vi2 = 2.4 V). However, when the second macro is tested separately by applying an AWG signal of 3 V DC as in figure 3.2(c), the result can look erroneously faulty, depending on the output resistance of the AWG. When the AWG has an output resistance comparable to Ro1, the output measured from Macro2 will be similar to the operational-mode value, assuming there is no loading problem at the output of the block. However, if the output resistance of the AWG is close to zero, the input to Macro2 becomes 2.86 V DC, and its output will possibly deviate from the tolerance interval. The same considerations obviously apply to the output of a macro, where the input resistance of the measurement equipment plays the same role as RoAWG.
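To make the arithmetic above easy to verify, a minimal Python sketch (using the same example values) computes the DC input voltage of Macro2 for the three cases of figure 3.2:

```python
# DC loading between two macros, using the example values from the text.
Ro1, Ri2, Rmux, Vsrc = 400.0, 2000.0, 100.0, 3.0

def vi2(r_source, r_mux=0.0):
    """Voltage divider seen by Macro2 for a given source and series mux resistance."""
    return Vsrc * Ri2 / (Ri2 + r_source + r_mux)

print(vi2(Ro1))          # (a) no DfT:               2.50 V
print(vi2(Ro1, Rmux))    # (b) DfT, operation mode:  2.40 V
print(vi2(0.0, Rmux))    # (c) DfT, ideal AWG:       2.86 V
```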

For AC and dynamic operation in the first case, the frequency-domain transfer function between Vo1 and Vi2 can be written as

\frac{V_{i2}(j\omega)}{V_{o1}(j\omega)} = \frac{R_{i2}}{R_{i2} + R_{o1}} \cdot \frac{1}{1 + j(\omega/\omega_c)} \qquad (3.3)

where \omega_c = \frac{1}{C_L} \cdot \frac{R_{o1} + R_{i2}}{R_{o1} \cdot R_{i2}}. In other words, the frequency band for the signal between the two macros is limited to the cut-off frequency f_c = \omega_c / 2\pi. In the second case, the transfer function becomes second-order,

\frac{V_{i2}(j\omega)}{V_{o1}(j\omega)} = \frac{R_{i2}}{R_{i2} + R_{mux} + R_{o1} - \omega^2 R_{i2} R_{mux} R_{o1} C_L C_{Lm} + j\omega (R_{o1} R_{mux} C_L + R_{o1} R_{i2} C_L + R_{o1} R_{i2} C_{Lm} + R_{mux} R_{i2} C_{Lm})} \qquad (3.4)

The frequency behavior of this transfer function is presented in figures 3.3 and 3.4 for varying output impedances and load capacitances. A similar relation is valid in the third case, and can be obtained by substituting VoAWG(jω) for Vo1(jω) and RoAWG for Ro1. This implies that the measured signal should have a frequency spectrum well below min(f1, f2), where f1 and f2 are the cut-off frequencies of the transfer function given in equation 3.4. In case of an AC measurement, the measured frequency f should be chosen such that f ≪ min(f1, f2). This will not cause a problem most of the time (see figures 3.3 and 3.4), but it should be taken into account for very high frequency measurements. Frequency compensation techniques can then be employed on the device interface board (DIB) to improve the measurement quality.
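As a rough illustration of this bandwidth check, the following sketch (the component values are arbitrary assumptions, chosen only for the example) computes the first-order cut-off frequency of equation 3.3 and the ratio f/fc for a planned measurement frequency:

```python
import math

# First-order interconnect model of equation 3.3 (assumed example values).
Ro1, Ri2 = 400.0, 2000.0               # output/input resistances, ohms
CL = 1e-12                             # load capacitance, farads

wc = (Ro1 + Ri2) / (CL * Ro1 * Ri2)    # rad/s, as derived above
fc = wc / (2.0 * math.pi)              # cut-off frequency, Hz

f_meas = 1e6                           # planned AC measurement frequency, Hz
print(f"fc = {fc/1e6:.0f} MHz, f/fc = {f_meas/fc:.4f}")
# A ratio f/fc << 1 means the DfT loading hardly affects the AC measurement.
```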

[Figure 3.3: The change in the interconnect bandwidth with changing output resistance. The magnitude response ('dft frequency response', linear volts) is plotted over frequency (log scale, 10 kHz to 100 GHz) for Ro1 = 100 Ω to Ro1 = 100 MΩ.]

Obviously, the effect of placing DfT in an analog block is not comparable to the digital case. The effects can be more relevant to system operation, as well as to the measurement results. Therefore, the decisions on the placement of DfT are made with respect to performance and test requirements. These choices are heavily application-dependent, so they will not be discussed here. After the high-level architecture and the high-level DfT blocks are chosen, the design provides a considerable amount of data to be used in test development. The following steps can then already be taken:

• Access conditions for each macro can be defined,
• For functional testing, tests can be selected based on the block specifications.

[Figure 3.4: The change in the interconnect bandwidth with changing load capacitance. The magnitude response ('dft frequency response', linear volts) is plotted over frequency (log scale, 10 kHz to 1 THz) for CL = CLm = 1 fF to CL = CLm = 10 pF.]

The above points imply that if all the analog testing is to be done functionally, there is in fact enough data to write the tests at this point, since the function of each block and how it can be accessed is already known.

3.4 Macro-Level Considerations

The automatic generation of test programs based on a transistor-level netlist is still a largely unsolved problem for analog circuits (see section 1.2.2.1). No general algorithm exists to perform this task, although algorithms based on specific input and/or circuit types have been developed (see section 2.5.4). Because of the difficulty of generating test inputs directly from design data, macro-based testing has been adopted as the standard for the test of mixed-signal and analog blocks.

In the framework that will be described in this chapter, a specification-based test approach will be used for analog macros. When the specifications of each macro are known, each block can be designed separately, to be combined for placement and routing afterwards. The design process proceeds in completely different manners for digital and analog blocks. The digital design process is an automated flow from a description in a hardware description language such as VHDL or Verilog down to gate-level design. The insertion of scan flip-flops is done automatically by means of existing CAT tools. Analog design at block level is, however, not automated, in spite of all the research efforts spent in this direction in the last two decades ([Kras99], [Deby98], [Leen91]). The design of an analog block is thus done manually, using the CAD simulation tools for verifying the design functionality and the conformity to the specifications. During the design of a macro, its functionality is continually verified by means of simulations. In fact, after the macro design is complete, the transistor-level design itself and the simulation results are the two main sources of information available in the design environment. There is an apparent analogy between simulating and testing an analog macro. Firstly, a simulation file contains the commands for generating input(s) for the macro and for analyzing the output(s), which is in fact also what is required in test, so a link seems possible. Secondly, simulations at the end of the circuit-level design stage give an estimation of how well the macro can possibly perform, unlike the macro specifications from high-level design, which give only the outer limits that must be satisfied. Suppose a specification requires a maximum total harmonic distortion (THD) of -60 dB, the macro is known from its simulation to be able to reach -70 dB, but it only achieves -61 dB during test. This deviation can point to a soft fault, which can have other consequences that are not detected by the applied tests but can cause the IC to fail when placed on a board. Taking the final simulation as basis can prevent this and similar testing errors.
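This reasoning can be made concrete with a small sketch; the THD numbers are the hypothetical ones from the example above, and the 5 dB margin is an arbitrary assumption. A check that compares the measured value against the simulated performance, rather than only against the specification limit, flags the soft fault:

```python
def check_thd(measured_db, spec_db=-60.0, simulated_db=-70.0, margin_db=5.0):
    """Classify a THD measurement against both the spec and the simulation.

    All values are in dB; more negative means less distortion.
    """
    if measured_db > spec_db:
        return "hard fail: specification violated"
    if measured_db > simulated_db + margin_db:
        return "suspect: passes spec, but far from simulated performance (soft fault?)"
    return "pass"

print(check_thd(-61.0))  # passes the -60 dB spec, but ~9 dB worse than simulation
```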

3.5 Simulation Support for Test

3.5.1 Simulation-Based Test Generation

In spite of the similarities explained in the previous section, there are also a number of differences between simulating and testing a macro. The first difference is that the circuit parameters tested and those simulated for verifying the same aspects of a macro are in general not the same. An example is the pole-zero simulations performed by designers to determine the stability of their circuit. The test counterpart of such a simulation would be a step-response or FFT test. Even when the simulated parameters are assumed to be the ones to be measured during a test, other differences arise. First of all, the ideal sources normally used during circuit simulation have to be replaced by test equipment, which exhibits nonideal features such as input and output impedances and signal limitations. Secondly, basing the test on a simulation requires not only the test input, but also some a priori information about the possible output. This a priori information is used to set up the test parameters such as sampling time, voltage interval, etc. If the above points do not form a problem, the standard simulation types (AC, DC, TRAN) can be mapped onto a measurement procedure. However, the mapping of more complex simulations (e.g. distortion analysis) cannot be defined so readily, since the simulation and the test obtain the distortion figures in completely different ways. It has been discussed in chapter 2 that cost is an important criterion for testing, especially in the production test stage. Estimating the costs of the generated tests can therefore be desirable. However, this requires a model of test costs based on the test time, test resources, etc. Such a cost model is not available at this level for analog tests, so at present it is not possible to use it in a simulation-test link.

Signal Generation
    Simulation: Ideal sources; no loading; no limits for voltage, frequency, etc. of the generated signals.
    Test: Real instruments and connections; nonideal input/output impedances and cable impedances; limits for all the parameters of the generated signal.

Signal Measurement
    Simulation: Ideal signal evaluators; no timing or triggering problems; signals are calculated from equations, so a priori knowledge of the signal to be measured is not required.
    Test: Real instruments; the correct triggering source, sampling time, etc. have to be supplied; some a priori knowledge of the signal is necessary to set up measurement parameters such as voltage/current range and aperture time.

Parameters (common point)
    The simulation type (AC, DC, TRAN) can be mapped to a test type and/or test parameters; some simulation parameters can be mapped directly into the measurement procedure.

Cost
    Simulation: Analysis types that require numerical methods (e.g. transient) are expensive in terms of CPU time; those that can be calculated by matrix manipulation or symbolic methods are cheaper.
    Test: The cost of a measurement depends on its time length and the resources used.

Settling time
    Simulation: DC and AC calculations are done using methods that begin from steady state.
    Test: A waiting time has to be inserted between the application of the stimuli and the first measurement to allow for the settling of transients.

Table 3.1: Common points and differences between simulation and test

The points at which simulations and tests differ from each other are summarized in table 3.1. The main conclusion that can be drawn here is that the only possible direct link between simulation and test is the 'aim', i.e. the design parameter to be simulated/measured. The definitions of analog simulations are always included in the simulation environment. This is done either together with the circuit input, as in SPICE, or, in the case of graphical user interfaces such as Cadence, by means of forms filled in with the simulation type and simulation parameters. In both cases, the following information (needed for making a test program) is missing from the simulation data:

• which design parameter is to be obtained,
• a priori information about waveforms to be captured and used in a tester setup (such as the sampling rate for an analog waveform).

If this information is included in the simulation input in a standard format and/or is extractable from the simulation results, a general link can be defined between design simulation and test program in terms of design parameters. The required information can in this case be gathered from the simulation file by means of a data extraction tool.
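As a sketch of such a data extraction tool: the annotation format ('* TESTINFO key=value' comment lines in a SPICE deck) is invented here for illustration; no such standard format exists in the simulators mentioned:

```python
import re

# Hypothetical annotation format: '* TESTINFO key=value' lines in a SPICE deck.
TESTINFO = re.compile(r"^\*\s*TESTINFO\s+(\w+)\s*=\s*(\S+)")

def extract_test_info(sim_file_lines):
    """Collect design-parameter and a-priori waveform data from annotations."""
    info = {}
    for line in sim_file_lines:
        match = TESTINFO.match(line)
        if match:
            info[match.group(1)] = match.group(2)
    return info

deck = [
    "* TESTINFO design_parameter=cutoff_frequency",
    "* TESTINFO sampling_rate=1e6",
    ".ac dec 10 1k 10meg",
]
print(extract_test_info(deck))  # {'design_parameter': ..., 'sampling_rate': ...}
```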

3.5.2 Virtual Testing

It has been explained in the previous sections that test debugging forms a TTM and cost bottleneck for mixed-signal IC's. Performing test debugging in the traditional manner has the following consequences:

• The TTM increases, because the time between first silicon and mass production increases,
• The costs increase because of debugging time spent on expensive test equipment.

In the last decade, a new CAT approach called 'virtual testing' has gained importance, especially because of its capability of preventing the negative TTM and cost effects of on-line test debugging. Virtual testing is the development and debugging of tests in a simulation environment using models of testers and the test interface. In other words, virtual testing brings the advantage that test debugging can begin before the first silicon is ready. In this way, the on-line debugging time decreases (or vanishes in the ideal case) and earlier shipping is possible. A reported side advantage is that a test engineer who spends less time on test debugging can concentrate more on the quality and costs of his/her test program. A general diagram of a virtual test setup is given in figure 3.5.

[Figure 3.5: General view of virtual test possibilities. Within the simulation environment, test configuration instances, tester models, the DIB design and a DUT model support both test program generation and test program verification, resulting in a test program.]

Definition 3.1 A test configuration is a description of a test method in terms of test resources used, test signal sourcing and measurement commands, and ATE-DIB and DIB-DUT connections.

In other words, a test configuration is an implementation of a test method. An example is a schematic comprising an AWG, a DUT and a frequency analyzer, describing a frequency-domain test. In this and the following chapters, a distinction will be made between generic test configurations and test configuration instances.

Definition 3.2 A generic test configuration is a setup that includes all the components required for running the test with different test parameter values, such that the ranges of voltage, frequency, etc. related to the generated and measured signals can be varied.

Once it is made, a generic test configuration can be used and reused by ‘instantiating’ it for a specific macro, i.e. setting up the test ranges in accordance with the specifications of the macro being tested.

Definition 3.3 A test configuration instance is the ‘instantiated’ form of a generic test configuration for a specific macro, test signal values and measurement ranges.
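A minimal data-model sketch of definitions 3.2 and 3.3 in Python (the field names are assumptions for illustration, not the format of any of the virtual-test tools discussed here):

```python
from dataclasses import dataclass

@dataclass
class GenericTestConfiguration:
    """Definition 3.2: a reusable setup with open test-parameter ranges."""
    name: str                  # e.g. "frequency_resp"
    resources: tuple           # e.g. ("AWG", "frequency_analyzer")
    design_parameters: tuple   # parameters this configuration can obtain

@dataclass
class TestConfigurationInstance:
    """Definition 3.3: the generic configuration bound to one macro."""
    generic: GenericTestConfiguration
    macro: str                 # macro under test
    signal_values: dict        # e.g. {"amplitude_V": 0.1}
    measurement_ranges: dict   # e.g. {"freq_Hz": (1e3, 1e6)}

freq_resp = GenericTestConfiguration(
    "frequency_resp", ("AWG", "frequency_analyzer"), ("gain", "cutoff_frequency"))
instance = TestConfigurationInstance(
    freq_resp, "opamp_1", {"amplitude_V": 0.1}, {"freq_Hz": (1e3, 1e6)})
```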

The resulting test configuration instance can be converted into executable code for the target tester. For this, the virtual test environment must contain code libraries for the target tester and a code generator to convert the information into code. An integrated environment with this ability has been described in [Kao 92]. A test program is created by making a sequence of test configurations and converting this sequence via the code generator. The verification of a test program consists of checking whether the test hardware and software work together to execute the steps of the test program. Any errors in the setup of the ATE, flaws in the design of the DIB, errors related to ATE limitations which are not taken into account, etc. have to be debugged during the test program verification. This differs from test program evaluation in the sense that verification does not check the capability of the test program to detect manufacturing faults, to run in a given amount of time, etc. The test program verification link in virtual testing (see figure 3.5) consists of simulation-based verification using tester models, the DIB and the DUT design. The DIB and DUT designs can be simulated at any abstraction level, depending on the simulation time/accuracy trade-off the given design requires. The tester models in the commercial 'virtual test' environments are supplied by the tester companies themselves. The tester models are in some cases multiple-level, since modeling a modern tester at high accuracy can result in very complex models with high simulation times [Bate92], which may sometimes be unacceptable.

Industrial virtual test environments are in general based on industrial ATE as the target tester. An example of the creation of a virtual test capability for a VXI-based test platform is given in [SanS99]. Another part of the design-test link for the same test platform will be presented in chapter 4.

3.6 Reuse Issues in Design and Test

It has been explained in chapter 2 that IP-based design and core reuse have had drastic effects on the IC industry in recent years. Reuse of both digital and analog blocks is in fact not new, although it is usually applied by 'copy-and-paste' rather than by a systematic methodology. For digital blocks, the common usage of IP cores has brought more systematic reuse methodologies into design practice, in which the tests supplied by the IP manufacturers are also inserted and processed systematically. In the case of reuse of mixed-signal tests, the important issues are:

• Macro definition
• Generic test documentation

The definition of macros depends on the product class to which the reuse framework will be applied. Considerations such as which blocks are often used, which parameters are required for block specification, etc. will be important in this choice. For each macro stored for reuse, formats for storing generic test data also have to be introduced. The overhead in this data often appears to present a problem for analog and mixed-signal blocks [Engi99a], resulting in non-systematic 'copy-and-paste' reuse techniques. The main choice is thus the grain size of the macros, based on a trade-off between the documentation effort and how thoroughly the IC will be tested.

3.7 State of the Art in Design-Test Link

In figure 3.6, a general view of the design and test domains with the related hierarchical steps is given. In the figure, the main possible links are presented.

Some of these links are presently available in CAD and CAT software and ATE hardware; some are still under investigation. At this point it is useful to note that the terms 'design-test link' and 'design-test integration' are often used in the test world synonymously with virtual test. This is probably because the commercially available integrated design-test frameworks put the focus on this issue. However, in the light of the definition made in the beginning of this chapter, all test frameworks using design data in a systematic way in order to improve the TTM, cost or quality figures of an IC fall under the term design-test link. Figure 3.6 illustrates the main channels where efficiency gains can be expected by linking design and test. It can be seen that these channels are:

• the automatic generation of the test program based on design data,
• the evaluation of the test program based on design topology/layout,
• the off-line verification of the test program by means of DUT and DIB designs.

In the rest of this section, the state of the art in the design-test link within these three main areas will be presented.

3.7.1 Virtual Test Software

In the last decade, many virtual test frameworks have been developed by EDA and ATE manufacturers. Teradyne was the first ATE company to develop an event-based simulation program that enables communication between tester models and a circuit simulator [Aust93]. This platform, named IMAGE Exchange, has been used in various virtual test frameworks such as DANTES [Bate92], [Kao 92] by Cadence and Saber-IC Pro by Analogy [Analog]. DANTES, the first of its kind, has both off-line test verification and test program generation capabilities. Tester model libraries and tools for developing the DIB are available. Each test configuration has to be entered manually by the designer or test engineer, and a set of configurations is edited into a test plan in the sequencer tool. This test plan is then automatically converted into a test program [Kao 92].

[Figure 3.6: Design-test flow, possible links. High-level design feeds test plan generation; transistor-level design feeds test program generation; the design layout feeds test program evaluation; the DIB design and tester models feed test program verification.]

The second well-known virtual test framework is Saber-IC Pro from Analogy. Its possibilities are similar to those of DANTES, except that it uses the Saber simulator, which is based on a different algorithm that is claimed to have better performance for mixed-signal simulations and better convergence properties for analog simulations. A number of virtual test solutions have also been presented by the research community. A well-developed virtual test concept is given in [Arno92], [Gold97], [Gold98], [Mieg97], [Mieg98], [Gram99]. The work described is focused on the generation of a test program from a tester-independent test plan [Mieg97], [Mieg98] and on interfacing a tester emulator with a simulation environment for test verification [Gold97], [Gold98].

3.7.2 Test Plan Generation Tools

The generation of a test plan based only on the analog circuit topology is a difficult and to a large extent unsolved problem. The usability of the existing tools is limited, since their development depends on certain limitations and assumptions with respect to the circuit topology and input signals.

More flexible solutions employ a macro-based approach, which requires either behavioral models or a list of macro specifications combined with the high-level IC structure. The research on test plan generation has produced some methods, based on design information at various levels of abstraction, in general either on a high-level description or on the transistor-level design (see figure 3.6). A method for generating test plans based on macro specifications and the IC architecture is given in [Engi99b]. In [Kaal98] and [Naik93], methods for the propagation of test stimuli to embedded blocks are presented. No commercial tools for fully automatic test plan generation for mixed-signal IC's exist at present.

3.7.3 Test Program Evaluation Tools

The state of the art in test program evaluation has been discussed in section 2.5.5. As explained, defect-based test program evaluation is still in the research phase, with a few EDA companies presenting tools that can perform this task to some degree. Testify from Analogy [Maje98], based on Saber, and Test Designer from Intusoft [TestDe], based on Spice, are two examples. However, none of these simulation environments is optimized for speed, which is most of the time the main reason for not choosing defect-oriented test evaluation for analog tests. Also, layout-based fault list extraction capability (such as presented in [Ohle96]) is not included in these test evaluation tools.

3.8 A General Framework for Design-Test Link

In the previous discussions, several aspects of a design-test link have been outlined. From these discussions, the requirements for a design-test integration framework can be summarized as follows:

• The test plan must be generated based on the data present in the design database,
• The test development process must begin as early in the design as possible,
• Mixed-signal and behavioral simulation capability must be present for test verification,
• ATE models for test verification and debugging must be available,
• Efficient fault extraction and fault simulation capabilities are required for test evaluation.

[Figure 3.7: Suggested framework for design-test link. Starting from the specifications, test planning accompanies high-level design and yields production and prototype test plans; RTL- and gate-level digital design and transistor-level analog design lead to the prototype and production test programs; the layout, floorplan and a defect statistics database feed fault list extraction and production test evaluation; prototype testing, production test debugging and production testing lead to the data sheet and mass production.]

A general outline of a design-test flow which fulfills the specified requirements is given in figure 3.7. Some of the existing problems with respect to the links specified in section 3.7 are treated in the following chapters (see the dark-colored boxes in figure 3.7).

Design Parameter                       Value     Unit
Open-loop gain                         ≥ 60      dB
Phase shift @ operating frequency      ≤ 0.1     rad
Unity gain bandwidth                   ≥ 50      kHz
Output saturation voltage              ≥ 4.5     V
Output offset                          ≤ 100     mV

Table 3.2: Short design parameter list for an operational amplifier instance

In chapter 4, a design-test framework for prototype test will be presented. First, the definitions regarding this framework and the design-test methodology are introduced here. The defect-oriented approach for test evaluation will be presented in chapters 5 and 6.

3.8.1 Definitions

Definition 3.4 Design parameters are the specifications of a macro that must be fulfilled for correct operation of an IC.

The design parameters are determined in the high-level design phase. A generic macro function (e.g., amplification) together with a set of design parameters (e.g., gain, bandwidth, offset voltage) determines the selection of specification-based test configurations to fully test the macro. As an example, a short list of typical design parameters for an operational amplifier macro is given in table 3.2.

Definition 3.5 Operating conditions are the additional parameters required to fully define the test situation, such as supply voltage, input voltage range, etc.

Operating conditions are not design parameters, but they define under which conditions the test must be applied to the circuit. These can be values related to input signals, measurement ranges and points. Operating conditions can also include environmental parameters such as temperature, if the test specifically requires the control of these conditions. An example of an operating condition is the input voltage amplitude that has to be known in order to determine the value of the bandwidth (a design parameter) when testing an operational amplifier.

Definition 3.6 Test parameters are variables that determine the tester settings for a certain measurement.

An example of a test parameter is the sampling rate that has to be set up for reading in an analog signal.

Definition 3.7 Control signals are IC input signals which determine the access to a macro through the on-chip DfT circuitry for performing a certain test.

An example of a control signal is the signal applied to the enable input of a multiplexer through which the input or output of an embedded analog macro can be accessed. Such a DfT setup is shown in figure 3.8. The control signal has to be supplied when e.g. Macro B is tested with a stimulus coming directly from the tester.

[Figure 3.8: Control signal for access to individual macros. A test signal and a control signal are applied to a DfT block between Macro A and Macro B; the pins for test signal and control signal are primary inputs of the IC.]

3.8.2 Design and Test Methodology

The design and test methodology for the generation of an evaluation test plan is based on the points discussed in this chapter. First, a macro-specification-based approach is adopted for analog and mixed-signal blocks.

This approach requires the access problems for embedded macros to be solved during design by adding DfT circuitry that makes each macro input and output accessible from the primary inputs and outputs of the IC. The DfT circuitry forms a basis for the application of analog tests in both production and prototype testing. The planning of production and prototype testing is illustrated in figure 3.7 as one block. The reason for this is the strong link between the two testing modes. In fact, especially when production testing is specification-based, the performance parameters obtained in the two tests are often the same. The main difference is that the prototype test will always include more detailed measurements and will be carried out without pass/fail thresholds. The design and test framework described in chapter 4 is developed for the generation of a test plan for prototype test. However, a similar approach can be used for the generation of a production test by developing the required test database in the production test software environment. The generation of the test plan is based on the macro design specification data provided by the designer. This data is used to make a selection of generic test configurations and to instantiate this selection. Sequencing this set of configurations and adding some other required data (discussed in detail in chapter 4) results in the test plan. The conversion of the test plan to a test program depends on the code required by the tester. The generation of a test program from a test plan can be performed in a library-based way, as sketched below. In this approach, the generic test configurations have a high-level description including which design parameters they obtain, which additional design information they require, and which connections between the tester and the macro have to be present in order to carry out the test. The same test configurations also have a lower-level description which corresponds to the tester code that has to be included in the test program to perform the test. When a test configuration is instantiated, the instance data is first passed to the high-level test definition and then to the test code. In this way, the code for the test configuration is complete and can be directly inserted into the test program code. It can also be seen in figure 3.7 that a link is formed between the transistor-level analog design and test plan generation. This link is based on the simulation results provided by the designer. The simulation results are used for determining the design parameter value for a specific macro.
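A sketch of this library-based conversion (the template format and the 'tester code' strings are invented placeholders; real tester code depends entirely on the target ATE):

```python
# Hypothetical library: per generic test configuration, a high-level
# description and a tester-code template with placeholders.
TEST_LIBRARY = {
    "frequency_resp": {
        "obtains": ["gain", "cutoff_frequency"],
        "template": "SOURCE AWG {amplitude_V}V SWEEP {f_start}Hz {f_stop}Hz\nMEASURE MAG",
    },
}

def generate_test_code(config_name, instance_data):
    """Pass instance data through the high-level definition into tester code."""
    entry = TEST_LIBRARY[config_name]
    return entry["template"].format(**instance_data)

program = generate_test_code(
    "frequency_resp", {"amplitude_V": 0.1, "f_start": 1e3, "f_stop": 1e6})
print(program)  # the filled-in code can be inserted directly into the test program
```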

3.9 Conclusions

In this chapter, the requirements for a design-test link have been discussed. The most important argument for linking design and test is the time and cost losses caused by a separated approach to design and test. Because the aims of the design and test processes are closely linked, the processes themselves must also be carried out in a systematic way in which data exchange is well-defined and easy. Various aspects of the design-test link have been introduced. Virtual test represents one of the possible links between the design and test processes. Products for virtual testing are available which make it possible to simulate the testing process for debugging the production test. The design and test aspects which have not been implemented yet include a link between design simulation and test plan generation. The possibilities of forming such a link have been investigated, and it has been concluded that the 'aim' of a simulation is the only link that can be defined in a general way between design simulation and test development. Based on the problems of interpreting the analog circuit structure for use in test generation, it has been concluded that a macro-based approach offers the greatest advantage for analog blocks. Such an approach has implications for the accessibility of macros and the interactions between macros. These issues have to be taken into account when this approach is adopted for prototype or production test. Finally, a general description of a design and test framework has been presented, and the requirements and the suggested design and test methodology have been discussed. A concrete example of the implementation of a design-test framework based on the concept given in figure 3.7 will be given in the next chapter.

3.10 Bibliography

[Analog] Saber-IC Pro website, http://www.analogy.com/Test/Apptech/sabericpro.htm

[Arno92] R. Arnold, M. Chowanetz, W. Wolz, K.D. Müller-Glaser, “Test/AGENT: CAD-Integrated Automatic Generation of Test Programs,” in Proc. International Test Conference, 1992, pp. 854-859.

[Aust93] T. Austin, “Creating a Mixed-Signal Simulation Capability for Concurrent IC Design and Test Program Development,” in Proc. International Test Conference, 1993, pp. 125-132.

[Bate92] S. Bateman and W. Kao, “Simulation of an Integrated Design and Test Environment for Mixed-Signal Circuits,” in Proc. International Test Conference, 1992, pp. 405-411.

[Deby98] G. Debyser and G. Gielen, “Efficient analog circuit synthesis with simultaneous yield and robustness optimization,” in Proc. IEEE/ACM International Conference on Computer-Aided Design, 1998, pp. 308-311.

[Engi99a] N. Engin, Improving the Test Time and Product Quality with DOTSS, Philips report no: RNB-C/47/99I-023, January 1999.

[Engi99b] N. Engin, H.G. Kerkhoff, R.J.W.T. Tangelder and H. Speek, “Integrated Design and Test of Mixed-Signal Circuits,” in Journal of Electronic Testing: Theory and Applications, Vol. 14, March 1999, pp. 75-83.

[Gold97] M. Goldbach, W. Glauert, T. Schlaaf and H. Grams, “Simulator- Independent Test Program Verification,” in Proc. International Mixed-signal Test Workshop, 1997, pp. 122-125.

[Gold98] M. Goldbach, H. Grams, W. Glauert, W. Hartl and G. Voit, “Simulation-Based Test Program Verification Using the SZ Test System Environment,” in Proc. International Mixed-signal Test Workshop, 1998, pp. 243-247.

[Gram99] H. Grams, W. Hartl, M. Hayn and W. Tenten, “Virtual Test Enables Test Program Verification and Debugging without Tester Hardware,” in Proc. International Mixed-signal Test Workshop, 1999, pp. 99-103.

[Kaal98] V. Kaal, H. Kerkhoff and J. Hollema, “Structural Test Generation for Embedded Analog Macros,” in Proc. International Mixed-signal Test Workshop, 1998, pp. 221-226.

[Kao 92] W. Kao, J. Xia and T. Boydston, “Automatic Test Program Generation for Mixed-Signal ICs via Design to Test Link,” in Proc. International Test Conference, 1992, Washington DC, pp. 860-865.

[Kras99] M. Krasnick, R. Phelps, R.A. Rutenbar and L.R. Carley, “MAELSTROM: efficient simulation-based synthesis for custom analog cells,” in Proc. Design Automation Conference, 1999, pp. 945-950.

[Leen91] D.M.W. Leenaerts, “TOPICS: A new Hierarchical Design Tool Using an Expert System and Interval Analysis,” in Proc. Seventeenth European Solid State Circuits Conference, 1991, pp. 37-40.

[Maje98] D. Majernik, C. Siegel and S. Somanchi, “Behavioral Fault Simu- lation of Large Mixed-Signal UUT’s Using the Saber Simulator,” in Proc. Asian Test Symposium, 1998.

[Mieg97] M. Miegler, “Techniques for the Generation of Test Routines from a High-Level Description,” in Proc. International Mixed-signal Test Workshop, 1997, pp. 115-121.

[Mieg98] M. Miegler, O. Kraus, H. Tauber, G. Krampl, S. Sattler and E. Sax, “Tester Independent Program Generation Using Generic Templates,” in Proc. International Mixed-signal Test Workshop, 1998, pp. 260-263.

[Naik93] R. Naiknaware, G.N. Nandakumar and S. R. Kasa, “Automatic Test Plan Generation for Analog and Mixed Signal Integrated Circuits Using Partial Activation and High Level Simulation,” in Proc. International Test Conference, 1993, pp. 139-148.

[Ohle96] M.J. Ohletz, “Realistic Faults Mapping Scheme for the Fault Simulation of Integrated Analogue CMOS Circuits,” in Proc. International Test Conference, 1996, pp. 776-785.

[SanS99] D. San Segundo Bello, R. Tangelder and H. Kerkhoff, “Modeling of a Verification Test System for Mixed Signal Circuits in HSpice,” in Proc. International Mixed-signal Test Workshop, 1999, pp. 81-89.

[TestDe] Test Designer website: http://www.intusoft.com/products/testdesigner.htm.

Chapter 4

MISMATCH: A Framework for Design-Test Link

The general setup of an integrated design and test environment has been explained in the previous chapter. An example implementation of such an environment will be presented in this chapter. The structure and operation of the system will be discussed in detail and subsequently evaluated using a mixed-signal IC as an example.

4.1 System Overview

MISMATCH (MIxed-Signal MAnipulation Tool for CHips) is an environment providing an integrated flow of design and test activities. The test flow is aimed at the development of a test program for design evaluation. The evaluation test plan and program are generated automatically, based on the high-level topology of the IC and the macro specifications. This evaluation test process requires not only a systematic inspection of measurement results but also a careful choice of the design parameters to be measured. The cooperation of the designer and the test engineer is indispensable for maintaining the consistency between the activities on the two sides. Therefore, the primary aim of MISMATCH is to facilitate this cooperation in an efficient and systematic way by formalizing the data needed for specification-based test plan generation. The design and test data has to be shared and exchanged in a compact, yet usable manner.


The discussion presented in this chapter will focus on analog and mixed-signal macros, since the framework is built around the conventional test techniques for digital blocks and the greatest challenges treated here are of an analog nature. Later in the chapter, the treatment of digital blocks within the MISMATCH framework is explained.

Figure 4.1 presents a global view of the task division and the (analog and mixed-signal) data flow in the MISMATCH system. The functionality of the system is divided into two parts, which are integrated into already existing commercial tools. The design environment Cadence has been used as the CAD framework, while the CAT tool LabVIEW together with a mixed-signal tester makes up the (evaluation) test environment. The mixed-signal tester consists of an IMS ATS-200 digital tester and a VXI-based IntegraTEST mixed-signal test platform [Inte95]. The macro-level design data in the Cadence environment has been used in test plan generation. In this way, it is possible to start test plan generation in very early phases of the design process.

[Figure 4.1: Overview of the analog part of the MISMATCH environment. The design database (macro models, DfT cells, high-level topology) supplies DfT cell models and simulation files to the design process; the test database (test configurations, tester data) supplies the test development process, which produces the test program used for testing.]

4.2 Integrated Design and Test Flow

A general overview of the test plan generation flow is shown in figure 4.2. In MISMATCH, the test program includes the specification-based testing of each underlying macro. For this reason, a test plan can be generated as soon as the macro functions, macro specifications and interconnections are known. The first step of the test development process is the provision of this required data by the designer. The circuit schematic at the highest level, consisting of macros, DfT blocks and the interconnections between them, must exist in the design directory before test generation can begin. When this data is available, MISMATCH can be run to generate the test program. During test generation, each macro is processed separately. The designer is asked to enter the values of the design parameters and operating conditions for each macro. This data is used to select a set of generic test configurations (called simply tests from this point on) that can fully test the specified macro. This process is called test selection and will be discussed in section 4.6.2. The selected set of generic test configurations for each macro is called the macro test set. When a test set has been obtained for each analog macro, access to embedded macros is ensured by adding the commands for generating the digital test control signals to the test plan. For determining the control signal values for access, the DfT models and connections supplied by the designer are used (see section 4.7.1 for details). The last step of test plan generation is the addition of tester routing commands to the test plan. Making and breaking tester connections in such a way that the tester-DIB-DUT connections are correct for a specific test is called tester routing. For performing this task, the connection structure of the test framework is included in the test database, and the routing commands are added to the test program based on this information. The steps of this process will be described in detail in section 4.7.2.

4.3 Test Database

The test database is an important information source for the operation of MISMATCH. The organization of the test database (LabVIEW environment) can be seen in figure 4.3.

[Figure 4.2: Integrated design-test flow in MISMATCH. On the design side: the IC functionality and high-level topology with macro specifications, interconnections and DfT model(s) at the behavioral level; the transistor level with simulation results and the layout at the structural level; and manufacturing with process statistics, leading to the prototype IC. On the test side: extraction of specifications per macro, test selection, control signal generation and tester routing, and test evaluation, resulting in the test plan.]

The macro types, their tests and the related design parameters must be in accordance with those in the design database. For this reason, all the test data that MISMATCH requires is kept and maintained in the test database by the test engineer. When a new test is added to the database, the necessary macro and design links and other relevant data are added or updated. The addition of a new macro to the design database requires interaction between the design and test engineers, since decisions have to be made with respect to the specification list and the corresponding tests. The idea is that in applications where many blocks are reused, the time and effort invested in setting up these databases will pay off in terms of shortened verification times. In this way, the reuse of past test development work can happen in a systematic, time- and resource-efficient manner. All the data required by the CAD-related functionality of MISMATCH is stored in the so-called test information files. These files include information about the specifications that are measured by the test, the required operating conditions and the test parameters.

[Figure 4.3: Structure of the LabVIEW test database. The test database contains a test configuration library (test information files and test VI's), a DfT access library (DfT access descriptions and access VI's) and a help library, plus routing module connection files, routing VI's and power VI's.]

More information about the format of these and other file types is provided in the MISMATCH final report [Engi97a].

Apart from the test information files, the test database includes the test configuration library. Each of the test configurations in this library is dedicated to testing one or more design parameters of a macro in the MISMATCH macro library (see figure 4.4). These test configurations are implemented in the graphical language of LabVIEW. The programs written in this language for specifying a (set of) measurement(s) are called virtual instruments (VI's), since their functions are similar to instruments made for the specific measurements that they perform. Likewise, the tests implemented in LabVIEW will be called test VI's. An example of a test VI is given in Appendix A. A VI consists of two parts, the front panel and the block diagram. The front panel looks similar to the outside of a test instrument, forming a user interface where data can be entered or test results can be seen. The block diagram is an implementation of the actual test program, consisting of stimulus and measurement commands and calls to the test equipment drivers.

Besides the test VI's, there are also VI's in the test database that maintain other functions necessary for running the tests (see figure 4.3). These are:

power VI’s • routing VI’s • access VI’s • A power VI can be included in a test plan or run separately for powering up, powering down or resetting the system. A routing VI makes/breaks connections between the test equipment modules (e.g. arbitrary waveform generator) and the device interface board (DIB). The aim of an access VI is to generate the necessary digital signals for the DfT blocks. In this way, access to the macro under test (MUT) is guaranteed. The functional models of the DfT blocks are included in the DfT access description files (see figure 4.3). This information is used for determining the control signal values, as will be explained in section 4.7.1.

4.4 Design for Testability

As has been discussed in the previous chapters, access also forms the basis for how IC's and macros can be tested. In MISMATCH, it is assumed that each macro that has to be verified is directly accessible from the primary inputs and outputs by means of DfT blocks. This is a design condition for mixed-signal IC's in MISMATCH. This assumption generally holds at least for mixed-signal IC's with a digitized analog structure, since the analog circuitry is usually only peripheral. Because of this, the digital inputs/outputs can be accessed in a standard manner by means of scan chains. As for register-controlled analog IC's, the decision to add more DfT is essentially a trade-off between area and extensive testability. This decision has to be based on the quality and cost requirements of the product. In the few cases where it is not possible to insert sufficient DfT, other possibilities can be used to make macros accessible, for example by making macros transparent for testing purposes, as described in [Soma90]. If such solutions are not possible, new, larger macros have to be defined.

4.5 Design Database

The information present in the design database is used for gathering design parameter and operating condition data from the designer and converting the gathered data to a format that can be further handled by MISMATCH to generate the test program. Central to this functionality is the macro library. As seen in figure 4.4, this library consists of existing macro designs, macro design parameter lists, macro-test links and macro help files. The organization and content of the macro designs is purely a design issue and will not be discussed here. In the existing MISMATCH framework, both the netlists and the simulation files of macros are included in the macro library. The macro library serves both design and test purposes. For design purposes, the macro design files form a basis for a reuse-based design methodology. To link the reuse of tests directly to this methodology, extra test data is added for each macro. This data consists of (see figure 4.4):

• Macro design parameter list (a list of all the macro design parameters which can be a basis for testing),

[Figure 4.4: Structure of the design database. The design database contains the macro library (macro designs, macro parameter lists, macro-test links, macro help files and SKILL interface functions) and the DfT library (DfT block designs and conversion functions).]

• Macro-test links (the links showing which test can obtain which macro design parameter),
• Macro help (the help text about the test configurations, design parameters and operating conditions of the macro).

This data originates from the test information files, which are part of the test database. These files are transferred from the test database to the design database each time MISMATCH is started. The test engineer is responsible for maintaining and improving the test information files based on discussions with the designer. The DfT library in the design database includes the DfT blocks used. In figure 4.5(a), the simple DfT block used in the demonstrator circuit for MISMATCH and its connection with macros is shown. Its access conditions (which have to be present in the test database) are given in figure 4.5(b). However, the DfT structure can be a multiplexer, a test bus, a scan flip-flop or another cell, depending on the design and test requirements. It is necessary to define in a general way how the structure must be used for test. For this purpose, the so-called access description is introduced for each DfT block. The idea is that each DfT structure can be defined as a set of rules (e.g., addresses in the case of buses and correct enable/disable signals in the case of multiplexers) which determine the access for each enable condition. A similar principle has been used earlier for the testing of digital embedded reusable cores [Mari97].
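A sketch of how such an access description could be stored and queried; the rule content follows figure 4.5(b), but the data structure itself is an assumption for illustration:

```python
# Access description for the simple multiplexer DfT cell of figure 4.5(b):
# each rule maps an enable value to the resulting mode.
mux_access_description = {
    "select": {1: "test", 0: "normal operation"},
}

def control_value_for(dft_rules, pin, wanted_mode):
    """Return the logic value to apply to 'pin' to reach 'wanted_mode'."""
    for value, mode in dft_rules[pin].items():
        if mode == wanted_mode:
            return value
    raise ValueError(f"no access rule gives mode {wanted_mode!r} on {pin!r}")

print(control_value_for(mux_access_description, "select", "test"))  # 1
```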

4.6 MISMATCH CAD Data Flow

The MISMATCH steps that require data from the designer are described in this section. The diagram of this flow is given in figure 4.7. The aim is to obtain the design parameters and operating conditions per macro from the designer. The interface to gather and organize the input from the designer has been implemented using Cadence SKILL routines. In this way, MISMATCH can be used by the designer as a part of the CAD environment (see figure 4.6). In MISMATCH, the test program generation process is initiated by the designer. When the block diagram of the IC and the design parameters of the macros are determined, the test plan generation can begin.

[Figure 4.5: Example of a simple DfT cell for embedded macros. (a) Macro 2 (the MUT) is embedded between Macro 1 and Macro 3; test inputs (test_in_2, test_in_3), test outputs (test_out_1, test_out_2) and select lines (select_a, select_b) provide access. (b) Access conditions: select = 1 selects test mode, select = 0 selects normal operation.]

For this, the designer must choose which design parameters of each macro should be verified. During a MISMATCH session, (s)he can enter this selection together with the limit values for each design parameter. When this information has been entered for each macro, the selection of a test set that covers all the selected design parameters is initiated. After the test set is selected, the design parameter values are used to instantiate each test. When extra information is required for a test, this data (i.e. the operating conditions of the test) is automatically requested from the designer.

4.6.1 Usage of Simulation Results

Simulations are the conventional approach for a designer to verify whether the design will operate correctly and meet its specifications (design verification). The first step in IC design consists of translating the overall specifications of the IC into the IC block diagram, a proper choice of macro functionality and the values of the macro design parameters. The rest of the design process consists of designing the macros. Especially in industrial applications, where designs based on previous products are very common, the macro designs from earlier products are often reused.

Figure 4.6: Appearance of the MISMATCH CAD interface

The reuse may involve some small changes and improvements, but the operating principles of the macro as well as the critical aspects of the macro functionality remain unchanged. In such cases, the designer may typically aim to verify only a selection of all the macro design parameters, since (s)he knows from the previous versions that the remaining design parameters meet the requirements. Either the design parameters that can be affected by the changes in the current version of the IC, or those which form the basis for the performance of the IC, are verified. The simulation of the macro design for obtaining these critical design parameters gives the designer an idea of the actual functioning of the macro. The parallelism between simulation and test, and the advantages of being able to link the two, have been discussed in chapter 3, concluding that the only link between simulation and test is the 'aim', i.e., the parameter measured/simulated.

[Figure 4.7: Overview of the MISMATCH CAD flow for the analog parts. From the high-level topology and the transistor-level macro designs with their simulations, the designer drives the generic test set selection (design parameter list → generic test set) and the test set instantiation (design parameter and operating condition values), resulting in a dedicated test set.]

For this reason, a link between simulation and test has been implemented in MISMATCH in terms of design parameters. Normally, the designer has to enter the design parameters per macro. However, for specific simulation types that are often used, there is the possibility of extracting design parameters from a simulation result file. An example of this procedure is illustrated in figure 4.8. The user interface for the insertion of design parameters is shown at the lower left corner. In this case, the designer has chosen the design parameters gain, cutoff frequency and output offset voltage for an operational amplifier. One of these, the cutoff frequency, is extracted from a simulation result file that the designer specifies. This is done using an algorithm developed for extracting the cutoff frequency value from a frequency response simulation. A script implementing this algorithm is included in the design environment. In order to use simulation results, the user interface activates this script, which in turn calculates the relevant specifications from the simulation file. The extracted value then appears in the design parameter window (see figure 4.8) and enters the MISMATCH flow as the value of the design parameter cutoff frequency.
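The extraction script itself is not reproduced in this text; a plausible minimal version of the cutoff-frequency algorithm (find the frequency where the magnitude drops 3 dB below its low-frequency value) could look as follows, with invented sample data:

```python
import math

def extract_cutoff(freqs, mags):
    """Return the -3 dB cutoff frequency from a frequency-response sweep.

    freqs: ascending frequencies in Hz; mags: linear magnitudes.
    """
    target = mags[0] / math.sqrt(2.0)   # -3 dB relative to the passband value
    for f_prev, m_prev, f, m in zip(freqs, mags, freqs[1:], mags[1:]):
        if m <= target:
            # Linear interpolation between the two bracketing points.
            frac = (target - m_prev) / (m - m_prev)
            return f_prev + frac * (f - f_prev)
    return None  # cutoff lies above the simulated frequency range

freqs = [1e3, 1e4, 1e5, 1e6]
mags = [1.00, 0.99, 0.71, 0.10]
print(extract_cutoff(freqs, mags))  # roughly 1.0e5 Hz for this sample sweep
```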

[Figure 4.8: Test set selection and design parameter extraction from simulation results. The macro-test links table maps each parameter (with unit) to its tests: closed-loop gain (-): frequency_resp, dc_sweep; closed-loop phase shift (rad): frequency_resp; unity gain bandwidth (kHz): frequency_resp; output saturation voltage (V): dc_sweep; output offset (V): offset, dc_sweep. The extraction algorithm obtains the cutoff frequency from the simulated gain-frequency response.]

4.6.2 Test Set Selection

When the designer selects the design parameters, the design parameter-test configuration links are searched, and a test set is selected that covers the required set of design parameters. This is achieved by means of a greedy algorithm that finds any set of tests that can evaluate all the chosen specifications; a sketch is given below. There are three reasons why this works well for our system. First, the number of tests that can evaluate a given specification is not very high; in most cases, the choice is between 3-4 tests. Second, there are no specific costs associated with each test. In other words, tests are assumed to have equal cost, so any test set that covers the specifications is acceptable. Third, no test cost / test time minimization is included in this implementation. Since MISMATCH is an environment for verification test, cost minimization has not been considered a goal. In general, if it is necessary to set up a similar framework with test cost constraints, and if the test libraries are larger than in this example, it will still be possible to attack this problem with a more complicated algorithm. In this case, the problem can be related to the so-called set covering problem [Coud95], a well-known combinatorial problem which also occurs during some logic synthesis steps. The problem can be defined as: “Consider an m \times n matrix A where a_{ij} \in \{0, 1\} and a cost set C = \{c_1, ..., c_n\} where c_i > 0. Find x \in \{0, 1\}^n such that A \cdot x \geq (1, ..., 1)^T (the m \times 1 vector of ones) and \sum_{j=1}^{n} c_j x_j is minimized.” In the above problem definition, A represents the specification-test coverage matrix, such that a_{ij} = 1 if test j covers specification i, and a_{ij} = 0 otherwise. Here the number of candidate test configurations is n and the number of specifications selected by the designer is m. The solution space consists of vectors x of size n, where a 1 at the kth location denotes that the kth test is selected. There are two conditions on the solution x. The first condition (that every entry of the vector A \cdot x is at least 1) guarantees that all specifications are covered, and the second condition (that \sum_{j=1}^{n} c_j x_j is minimized) guarantees that the total cost of the selected test configurations is minimized. Each test is assigned a cost, denoted e.g. by c_i for the i'th test. How this cost should be chosen is a practical issue that strongly depends on the type of tests and the type of product, but the main components of test cost will be the test time and the test hardware used. These issues have been discussed in more detail in chapter 2 in the section on test costs. Although the set covering problem is NP-hard, there are a number of methods in the literature that can be used to solve it using heuristics or other alternative methods. These techniques are studied among others in [Coud95], [Coud96] and [Gros97]. More details are not included here, since the cost issue was not required for this implementation.
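A minimal version of such a greedy selection is sketched below; the coverage data are hypothetical, and the actual MISMATCH implementation is not reproduced here. The sketch repeatedly picks the test covering the most still-uncovered specifications:

```python
def greedy_test_selection(coverage, required_specs):
    """coverage: test name -> set of specifications the test can evaluate."""
    uncovered, selected = set(required_specs), []
    while uncovered:
        # Pick the test covering the most still-uncovered specifications.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError(f"no test covers: {uncovered}")
        selected.append(best)
        uncovered -= coverage[best]
    return selected

coverage = {
    "frequency_resp": {"gain", "cutoff_frequency"},
    "dc_sweep": {"gain", "output_offset", "saturation_voltage"},
    "offset": {"output_offset"},
}
print(greedy_test_selection(coverage, {"gain", "cutoff_frequency", "output_offset"}))
# -> ['frequency_resp', 'dc_sweep']
```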

4.7 MISMATCH CAT Data Flow

Most of the steps for test plan generation run within the CAT environment. Here, the input from the designer, the test database and the circuit structure are used to generate a test plan. The generated test plan can be executed directly or modified if required. A general description of the test hardware used is given in Appendix B. In this section, the operation of the CAT part of MISMATCH will be explained based on this system and the test database. As a result of the designer's interaction with the MISMATCH CAD interface, a set of test VI's is selected and transferred to the CAT environment. The CAT environment includes the test database and a part of the MISMATCH functionality. This functionality has been implemented in a LabVIEW VI called mismatch generate. A flowchart of the mismatch generate VI is given in figure 4.9. The main tasks of mismatch generate are control signal generation, routing and sequencing of the test configurations selected in the design environment. The output is a test plan file that is readable and executable within the LabVIEW sequencer. This enables the test engineer to view the generated sequence of tests (and modify parts of the generated test plan or test parameters for individual tests if desired) and run the test program directly from the same interface.

[Figure 4.9 flowchart: read the netlist and pinning list; open the output file; for each macro under test (MUT = Macro_n): search the test set for the MUT; if no test set exists, continue with the next macro; otherwise calculate the control signal values and insert the control signal generation in the output file; then for each test T = Test_k in the set: calculate the needed switch connections and insert the routing commands and the test T in the output file; when all tests of the MUT are done, continue with the next macro until all macros are done.]

Figure 4.9: Flowchart of test plan generation for analog parts in the CAT environment

4.7.1 Test Control Signal Generation

It is assumed in MISMATCH that each macro is surrounded with sufficient DfT in order to be accessed from the IC pins. This assumption is justified by the small number of analog macros in digitized analog IC's and the fact that these IC's are functionally tested with full access in both prototype and production test. The control signals are logic values that are applied to the enable pins of the DfT blocks. The value of a control signal determines whether the neighboring macro is in normal operation mode or in test mode. Test mode means that the macro is isolated from the surrounding blocks and its inputs and outputs can be accessed from the DUT pins. In order to measure the correct signals, the control signals have to be generated in synchronization with the execution of the test configurations. In MISMATCH, the control signal generation is inserted in the test plan before the sequence of test configuration instances corresponding to the same MUT. A representative pseudocode for the control signal generation algorithm used in MISMATCH is given in figure 4.10. The inputs of the algorithm are the circuit structure as read from the macro-level netlist ('circuit') and the macro under test ('MUT'). In principle, the type of macro is not relevant for this operation since the MUT is seen as a black box. First the algorithm generates a list of DfT blocks that are connected to the MUT input (line 01 in figure 4.10); it then traverses this list searching for an access rule to connect each MUT input to the corresponding PI (line 06 in figure 4.10). Once the rule is found, the logic value it implies is assigned to the digital tester pin connected with this PI (lines 07-08 in figure 4.10). The PI number corresponding to the DfT input, the MUT instance number and the generic input name for the MUT type are registered to be used for routing the tester channels to the MUT (line 09 in figure 4.10). Then a similar list is made of the DfT blocks which are connected to the outputs of the MUT, and the operations are repeated for these DfT blocks.

4.7.2 Automatic Routing for the Test Set

The connection of the DUT with the tester takes place via the device interface board (DIB), where the digital pins and analog test channels (referred to from here on as 'IMS channels') are connected to the pins of the DUT.

00 Algorithm generate_control_signals(circuit,MUT)
01   dft_list := find_DfTAtInput(circuit, MUT);
02   for each DfT in dft_list do
03     for each DfT_output in output(DfT) do
04       MUT_input := findpin(net(DfT_output));
05       if MUT_input = VALID
06         (input(DfT),control(DfT),rule) := findrule(type(DfT),output(DfT));
07         ATS_pin := find_ATS(pin_no(control(DfT)));
08         generate_command(ATS_pin,test_mode(rule));
09         register(pin_no(input(DfT)),MUT,generic_input_name(MUT));
10       end if
11     end for
12   end for
13   dft_list := find_DfTAtOutput(circuit, MUT);
14   for each DfT in dft_list do
15     for each DfT_input in input(DfT) do
16       MUT_output := findpin(net(DfT_input));
17       if MUT_output = VALID
18         (output(DfT),control(DfT),rule) := findrule(type(DfT),input(DfT));
19         ATS_pin := find_ATS(pin_no(control(DfT)));
20         generate_command(ATS_pin,test_mode(rule));
21         register(pin_no(output(DfT)),MUT,generic_output_name(MUT));
22       end if
23     end for
24   end for
25 end (generate_control_signals)

Figure 4.10: Test control signal generation algorithm

Load impedances and other required components are included on the DIB [Kerk97]. The hardwired connections of the VXI modules to the switch matrix and the RF multiplexer are shown in Appendix B. According to these connections there are a number of IMS channels to which each VXI instrument can be connected. The connections of the switch matrix and RF multiplexer are made/broken by means of software commands included in the test plan. The routing algorithm checks from the test database which hardware each test in the test set requires. With these requirements and the input-output access data saved during control signal generation, the connection requirements of the switch matrix are calculated for each test. There are also a number of connection rules (due to existing hardware connections, etc.) included in the test database. These rules are checked before the routing data is added to the test plan. A pseudocode representation of the routing algorithm is given in figure 4.11. For each connection specified in the test module connection file (see figure 4.3 and Appendix B), the VXI address of the tester module required by the test is given. Since the input name of the MUT is given only in a generic manner, the pin numbers and generic input names registered during control signal generation are used to locate the pin number (line 04 in figure 4.11). Next, the corresponding IMS channel number is looked up from the pinning list (line 05 in figure 4.11), and the routing command connecting the calculated VXI and IMS numbers is generated in the test plan file (line 08 in figure 4.11).

00 Algorithm tester_route(circuit,MUT,test) 01 read_tmc(test); 02 for each connection do 03 VXI_no:=VXI(connection); 04 pin_no:=find_register(generic_MUT_input(connection)); 05 IMS_no:=search_pinning_list(pin_no); 06 no_conflicts :=check(VXI_no,IMS_no); 07 if (no_conflicts) 08 generate_routing(VXI_no,IMS_no); 09 endif 10 end for 11 end (tester_route)

Figure 4.11: Tester routing algorithm

Once generated, the test plan can be automatically converted to the format of the sequencer tool SequenTEST 2, developed by MicroLEX [Sequ97], which is an extension to LabVIEW. Through the sequencer user interface, the test program can be viewed and executed. The test engineer can see the generated sequence of tests and the relevant control and routing inputs, and it is also possible to modify parts of this test plan and/or parameter values for the tests manually if desired. In other words, the test engineer retains full control over the test plan.

4.7.3 Test Generation for Mixed-Signal Macros

The generation of tests for mixed-signal blocks such as ADCs, DACs and PLLs is carried out in a similar way as for the analog blocks. When embedded, access to this class of macros is possible by using scan cells at the digital side and analog DfT at the analog side. The generation of input and the measurement of output are possible with test configurations in a similar manner as in the purely analog case, since a test configuration can also include the generation and measurement of digital data. In this respect, the MISMATCH setup does not impose any limitations with respect to this class of macros. Test configurations with a statistical nature (such as the histogram test), or tests that require the calculation of an FFT (such as the THD test), can be made by including the required post-processing steps in the test configuration. Tests which involve accurate timing measurements, such as the jitter test, can in principle also be run, although the accuracy depends on the performance of the test instruments and DfT modules used.
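As an example of such a post-processing step, the sketch below computes THD from an FFT of a captured output waveform. It is a minimal sketch, not the actual test configuration code; the sampling parameters and the captured signal are hypothetical, and coherent sampling is assumed so that no windowing is needed.

import numpy as np

def thd(samples, fs, f0, n_harmonics=5):
    """Total harmonic distortion of a captured waveform.

    samples : time-domain samples of the DUT output
    fs      : sampling rate (Hz), assumed coherent with f0
    f0      : fundamental frequency (Hz)
    """
    spectrum = np.abs(np.fft.rfft(samples))
    bin_width = fs / len(samples)
    def amplitude(f):                 # magnitude at the bin nearest to f
        return spectrum[int(round(f / bin_width))]
    fund = amplitude(f0)
    harmonics = [amplitude(k * f0) for k in range(2, n_harmonics + 2)]
    # THD = RMS sum of the harmonics relative to the fundamental.
    return np.sqrt(sum(h * h for h in harmonics)) / fund

# Hypothetical capture: 8 kHz fundamental with 1% third-harmonic content.
fs, f0, n = 1.024e6, 8e3, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 3 * f0 * t)
print("THD = %.2f %%" % (100 * thd(x, fs, f0)))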

4.7.4 Test Generation for Digital Macros

For digital blocks, the presence of full scan chains and the accessibility of the input/output lines of the digital part are assumed. This condition is readily satisfied when the lines that form the interface between the analog and digital blocks are accessible, which is also an analog requirement. The design and test flow for digital parts typically includes VHDL synthesis, scan register insertion and the usage of an ATPG tool to generate test vectors. For MISMATCH, our local ATPG tool PODAL, based on the PODEM algorithm [Goel81], has been used for the generation of digital tests. For testing digital blocks in MISMATCH, the designer generates test patterns with an ATPG tool in the standard manner. The inclusion of these tests in the complete test flow is possible since the LabVIEW environment used also has VI's for forcing and measuring via the digital tester pins. Since only these pin groups are used for digital test, routing of various modules is not an issue here. The treatment of digital DfT is included in the ATPG sequences, so test control signal generation is also not used in the digital tests. The actions required to generate digital tests within MISMATCH are performed by selecting the parts concerned and running a MISMATCH command (process digital macro) which activates the ATPG for that block. Test patterns generated in this way can either be applied separately via a stand-alone program, or included in the MISMATCH test program by using the LabVIEW digital test VI's.

4.8 An Example: Design and Test of a Compass Watch

4.8.1 IC Overview

The MISMATCH framework described in the previous sections has been used for the design and test of a mixed-signal IC [Tang97] in order to evaluate its possibilities and limitations. A block diagram of the demonstrator IC, a compass watch, is shown in figure 4.12. This IC combines digital wrist watch functions with an electronic compass and provides output in seven-segment display format. The functionality of the IC will be briefly explained in this section before the tests and their results are presented in the following sections.


Figure 4.12: Block diagram of the compass watch IC

The compass watch IC measures the earth's magnetic field by means of integrated fluxgate sensors [Diem96], positioned orthogonally as suggested in the block diagram. The voltage waveforms from the two sensors are processed in the digital part of the compass watch to calculate the x and y coordinates of the earth's magnetic field. The calculation is based on the so-called pulse position principle [Diem96]. This method can be explained as follows. In figure 4.13(b), the fluxgate sensor characteristic (magnetic induction B versus magnetic field H) is given. When the coil is excited by a triangular current waveform, the magnetic field Hexc due to this current will be as shown with the solid line in figure 4.13(a) (since H is proportional to

the current). Since the earth's magnetic field (Happlied) at that point is added to this waveform, the total magnetic field in the coil (Hexc + Happlied) will be as given with the broken line in figure 4.13(a). Combining this with the sensor characteristic, the induced flux versus time due to Hexc and Hexc + Happlied can be obtained as the waveforms in figure 4.13(c). The induced voltage is equal to the time derivative of the flux, so it is obtained as the waveform in figure 4.13(d). Following these relationships, when the sensor characteristic is known, the shift in the peaks of this waveform can be used to calculate Happlied. This calculation is made by means of digital processing methods to obtain the exact value of the magnetic field in the x and y directions.
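A simplified quantitative version of this relationship can be written down under the assumption that the peak of the induced voltage occurs when the total field crosses a fixed threshold $H_s$ of the B-H characteristic; $H_s$, the ramp slope $k$ and the linear-ramp approximation are simplifying assumptions, not values taken from the design. With $H_{exc}(t) = kt$ on a rising ramp:

\begin{align*}
k\,t_1 &= H_s && \text{(excitation only)}\\
k\,t_2 + H_{applied} &= H_s && \text{(with the earth field added)}\\
\Rightarrow\quad H_{applied} &= k\,(t_1 - t_2)
\end{align*}

so the peak shift $\Delta t = t_1 - t_2$ visible in figure 4.13(d) is, in this approximation, directly proportional to the applied field.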


Figure 4.13: Pulse-position operating principle of a fluxgate sensor

The detection of the magnetic field value has been implemented as follows. The part before the sensors is called the excitation part. This part is responsible for supplying the triangular current waveform to the sensor pair. The part after the sensors will be called the detection part; it converts the sensor output into digital form. The oscillator generates a triangular voltage waveform at 8 kHz, which is converted to a current waveform of

10 mA peak-to-peak by means of a V/I converter. The voltage follower between the oscillator and the V/I converter acts as an impedance transformer between these two blocks. DfT blocks are included between these blocks so that each of them is independently testable. The inputs and the outputs of the sensors are time-multiplexed, and hence the oscillator can be shared. The pulse detector converts the pulses into a binary sequence. Details about the functioning of these blocks can be found in [Diem96]. The DfT blocks seen in the block diagram of the analog part have the same structure as the multiplexer DfT block in figure 4.5 and are switched between test and normal operation modes by means of the control signals generated as described in section 4.7.1. Most of the functionality of the compass watch is delivered by the digital blocks. A description at the block level is sufficient in this context, since the primary goal is the design evaluation of the analog blocks. The blocks pulse count and arctan perform the compass function. Pulse count computes the x and y field components from the binary sequence, while the block arctan generates arctan(x/y) to calculate the direction of the magnetic field. There is also a watch block performing the normal wrist watch functions. The output is sent via a switch to the seven-segment display driver, which shows either direction or time data depending on the selection. During the design steps, the flow described in figure 4.2 has been applied for the analog blocks. For the digital blocks, the normal digital CAD flow has been followed, using VHDL-based synthesis and scan insertion. The analog and digital test patterns have been generated separately and applied to the IC in the test environment as described in the previous chapters. In the following sections, the test libraries created for the analog macros and the test results for both analog and digital blocks are discussed.

4.8.2 Analog Functionality and the Test Library

The overall functionality of the analog blocks has been explained in the previous section. Here, the per-macro specifications and the test libraries developed to measure these specifications will be discussed. Four macro types can be identified in the analog excitation and detection parts: oscillator, buffer, comparator and V/I converter. The oscillator can be tested in a relatively simple way by observing its output, so no discussion of oscillator tests has been included. For the other macros, the general block behavior is taken as the basis for specification-based tests.

In figure 4.14, the analog part of the IC is given with the typical signals that will be found during operation. These would be the signals to check during a normal functional-test session; in that case, only being able to observe the relevant output lines would be sufficient. However, especially in the case of design evaluation, it is very valuable to know precisely which macro does not perform as well as expected, and it can be necessary to check this under various input conditions. For example, when it is observed that the output of the buffer is correct but the current at the sensors is too low, it may be necessary to check whether this is caused by a too low transconductance or by early saturation at the V/I converter output. Another advantage of making each block separately testable is the ability to test possible block types by emulating various behaviors. This has not been done for this demonstrator, but it is possible in general for static or low-speed measurements. By accessing the inputs and outputs of all blocks, any of them can be replaced by a behavioral model of an alternative design, and the measurements can be carried out in the same manner as with the original block.


Figure 4.14: Typical signals in the analog part of the compass watch

Macro          Design parameter            Min   Typ   Max    Unit
Oscillator     Output signal frequency     -     8     -      kHz
               Output signal amplitude     -     1     -      V
               Output DC level             -     2.5   -      V
               Duty cycle                  -     50%   -      -
               Non-linearity               -     10%   -      -
Buffer         Closed-loop gain            -     1     -      -
               Closed-loop phase shift     -     0     0.1    rad
               Unity gain bandwidth        8.5   10    -      kHz
               Output saturation voltage   3.5   4     -      V
               Output offset voltage       -     0     0.01   V
Comparator1    Upper triggering voltage    -     3.05  -      V
(output cmax)  Lower triggering voltage    -     2.9   -      V
               Output HIGH voltage         -     5     -      V
               Output LOW voltage          -     0     -      V
Comparator2    Upper triggering voltage    -     2     -      V
(output cmin)  Lower triggering voltage    -     2.05  -      V
               Output HIGH voltage         -     5     -      V
               Output LOW voltage          -     0     -      V
V/I converter  Linearity @ 5V peak         -     8%    10%    -
               Output signal DC level      -     0     -      mA
               Transconductance            5     5.5   -      mA/V

Table 4.1: Design parameter values of the analog macros in the compass watch

In table 4.1, the functional specifications of each analog block in the compass watch are given. Where no min or max values are given, a tolerance interval of ±5% is assumed. If only a minimum or maximum value is given, then that value alone is checked during testing. The test libraries have been made such that there is at least one test to verify each specification, and in some cases more tests. In the next paragraphs, the definitions of the parameters and test configurations are given for two example blocks. The buffer block is an operational amplifier macro connected with unity feedback. It is difficult to generalize an operational amplifier block as a macro, since many types of operational amplifiers exist, with the accent on different parameters depending on their application (such as high speed, low power, precision, etc.).
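These pass/fail rules translate directly into numeric test limits. The sketch below shows one way to derive them from a table 4.1 style entry; the function names and the tolerance handling are illustrative assumptions, not the actual MISMATCH implementation.

def pass_limits(typ=None, vmin=None, vmax=None, tol=0.05):
    """Derive (low, high) test limits from a table 4.1 style entry.

    If an explicit min and/or max is given, only those bounds are checked;
    otherwise a +/- tol interval around the typical value is assumed.
    """
    if vmin is not None or vmax is not None:
        return vmin, vmax
    if typ is None:
        return None, None
    margin = abs(typ) * tol
    return typ - margin, typ + margin

def check(measured, **spec):
    low, high = pass_limits(**spec)
    ok_low = low is None or measured >= low
    ok_high = high is None or measured <= high
    return ok_low and ok_high

# Buffer examples from table 4.1 (kHz and V respectively):
print(check(9.2, typ=10, vmin=8.5))    # unity gain bandwidth: pass
print(check(0.02, typ=0, vmax=0.01))   # output offset voltage: fail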

Design parameter            Test configuration
Closed-loop gain            frequency response, dc sweep
Closed-loop phase shift     frequency response
Unity gain bandwidth        frequency response
Output saturation voltage   dc sweep
Output offset voltage       offset, dc sweep

Table 4.2: Operational amplifier parameters versus tests

However, be it an operational amplifier or another block, it is always possible to draw a general qualitative and quantitative contour of a block which is used for a specific class of product, for example high-speed, low harmonic distortion operational amplifiers for audio applications. The evaluation test of an operational amplifier can consist of the basic DC and AC characterization tests as well as the verification of more complex design parameters such as total harmonic distortion. In some cases, the evaluation tests are performed at a higher block level, such as an ADC of which the operational amplifier is a sub-block. In the case of the compass watch functionality, the gain, the bandwidth and the limits of the linear region are important to guarantee that the triangular wave from the oscillator will not be distorted. Also, the offset should be very close to zero, since any offset at the sensor inputs will result in a corresponding faulty value added to or subtracted from the signal representing the earth's magnetic field. An example test library has been developed for these critical parameters; the parameter names and the corresponding tests are given in table 4.2. The frequency domain and DC sweep simulation results of the operational amplifier (simulated as a buffer) are given in figures 4.15, 4.16 and 4.17. The test results corresponding to these simulations are presented in Appendix C and will be discussed in the coming sections. At the test plan generation phase, the frequency interval of the first simulation has also been used to generate the design parameter unity gain bandwidth. Standard simulation results of this kind can be used for this purpose by including simple scripts in the MISMATCH framework.


Figure 4.15: Result of magnitude response simulation for the buffer


Figure 4.16: Result of phase response simulation of the buffer

The development of test configurations for the operational amplifier is described in detail in [Nee 96] and [Wess96]. Three tests have been developed for this macro: frequency response, DC sweep, and offset tests. Gain (in this case closed-loop) and bandwidth can be obtained from the frequency response test. The DC sweep test verifies the DC transfer characteristics of the operational amplifier by sweeping the input region as specified by the designer. The saturation voltage can be observed in this way, and any other problems with the DC behavior can be detected from the graphical result of this test. The offset behavior can also be globally observed from this test, but when more accuracy is needed, the offset test can be used to measure the DC output offset given a certain input region.
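A sketch of how the output saturation voltage could be derived automatically from DC sweep data is given below; the deviation criterion and all names are illustrative assumptions rather than the actual test implementation.

import numpy as np

def output_saturation_voltage(vin, vout, gain=1.0, dev=0.05):
    """Estimate the output saturation voltage from a DC sweep.

    vin, vout : measured/simulated DC transfer points (ascending vin)
    gain      : nominal closed-loop gain (1 for the buffer)
    dev       : allowed deviation (V) from the ideal line before the
                output is considered saturated
    """
    ideal = gain * vin
    saturated = np.abs(vout - ideal) > dev
    idx = np.where(saturated)[0]
    if len(idx) == 0:
        return None              # no saturation inside the swept range
    # Report the output level at the last point still on the linear part.
    return vout[idx[0] - 1] if idx[0] > 0 else vout[0]

# Hypothetical sweep of the buffer: linear up to ~4 V, then clipping.
vin = np.linspace(0, 5.5, 12)
vout = np.minimum(vin, 4.0)
print(output_saturation_voltage(vin, vout))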


Figure 4.17: Result of DC sweep simulation of the buffer

Tests have also been developed for the other analog macros of the compass watch. Another example is the comparator macro. In the analog detection part of the compass watch, two comparators in combination with a multiplexer have been used. This is shown in the diagram of the detection part given in figure 4.18. The two digital outputs indicate whether the voltage signal from the sensors is above or below a given threshold; hence the time instants of the peaks of the sensor waveform can be detected. The first comparator has a threshold voltage of 3 V, and the second 2 V. The outputs of the two comparators are inverted such that the output cmax becomes 1 when the signal at the common input of the comparators assumes a value larger than 3 V, and cmin becomes 1 when this input signal assumes a value smaller than 2 V. This is the application of the pulse position principle described in section 4.8.1: the comparators are used to calculate the positions of the peaks of the signal induced by the magnetic sensors.
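The comparator logic just described can be summarized in a short sketch that emulates the two inverted outputs and estimates the pulse positions from them; the thresholds follow the text above, while the sampled waveform and all function names are hypothetical.

def detect(v_sensor, v_high=3.0, v_low=2.0):
    """Emulate the inverted comparator outputs of the detection part.

    cmax = 1 while the input is above v_high,
    cmin = 1 while the input is below v_low.
    """
    cmax = [1 if v > v_high else 0 for v in v_sensor]
    cmin = [1 if v < v_low else 0 for v in v_sensor]
    return cmax, cmin

def pulse_centers(bits):
    """Indices of pulse centers in a binary sequence (peak position estimate)."""
    centers, start = [], None
    for i, b in enumerate(bits + [0]):      # sentinel closes a final pulse
        if b and start is None:
            start = i
        elif not b and start is not None:
            centers.append((start + i - 1) / 2.0)
            start = None
    return centers

# Hypothetical sampled sensor waveform crossing both thresholds:
v = [2.5, 2.8, 3.2, 3.4, 3.1, 2.6, 2.1, 1.8, 1.7, 2.2, 2.5]
cmax, cmin = detect(v)
print(pulse_centers(cmax), pulse_centers(cmin))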


Figure 4.18: Detection part of the compass watch

The performance parameters of a comparator are described in figure 4.19. For correct operation in the compass watch, the accuracy of the upper and lower triggering voltages has to be high. This is because any error in these values directly causes an error in the position of the peak pulse calculated from cmin and cmax. When this error is significant, the calculated values will not be the correct direction values. The comparator delay time is known to be insignificant for this application, since the sensor signal is low-frequency (8 kHz), so the delay is small compared to a period of this signal.

Figure 4.19: Comparator design parameters

A simulation result demonstrating the functioning of the detection part is given in figure 4.20. The upper and lower triggering voltages for both comparators are indicated in this figure. It can be seen that these values are in agreement with the specifications given in table 4.1. Various tests applying parts of the signal given in figure 4.19 have been developed for the comparator macro. These tests will not be explained in more detail here, since the operational amplifier is taken as the example macro for the testing of the compass watch IC.


Figure 4.20: Simulation results of the detection part (see figure 4.18 for signal names). The upper and lower trigger values of comparator1 are shown as an example.

4.9 Test Results

4.9.1 Digital Parts

The first example digital block is comp2bin calc, which is part of the pulse count block shown in figure 4.12. It is a small sequential block with two flip-flops; its task is to convert the comparator output of the analog part into peak detection information in 2-bit binary form. This is the input for the computation of the angle value of the magnetic field direction. The diagram of comp2bin calc is given in figure 4.21.

Figure 4.21: The digital block comp2bin calc

The test hardware used for digital testing (the ATS-200 from IMS) is described in Appendix B. For applying the test patterns via the ATS-200, the IMS digital test software IMS screens was used. The test patterns have been generated with the ATPG tool PODAL [Vosm85], based on the PODEM algorithm [Goel81], for stuck-at faults. The obtained test statistics for this block are given in table 4.3.

Number of test patterns       11
Number of detectable faults   56
Fault coverage                100%

Table 4.3: Test statistics for block comp2bin calc

The second example is a larger digital block labelled acd2segment. The diagram of this block is shown in figure 4.22. Its task is to generate alarm and clock data and convert them, together with the angle data, into a set of 7-segment display codes. The 7-segment outputs show the value of the direction angle, alarm or clock data as selected by the user.

Figure 4.22: The digital block acd2segment

The design of this block contained some redundancy, as appeared during test pattern generation. As a result, out of a total of 4380 faults, 5 are undetectable. For this reason it was not possible to reach 100% fault coverage in this case. No errors were detected when the generated test patterns were applied to the IC. The test statistics for this block are given in table 4.4.

Number of test patterns         409
Number of detectable faults     4380
Number of undetectable faults   5
Fault coverage                  99.9%

Table 4.4: Test statistics for block acd2segment

Apart from this structural test, a functional test was also carried out to test the clock and direction calculation functions. These tests also showed that the circuit works correctly.

4.9.2 Analog Parts

The block diagram of the analog excitation part of the compass watch is given in figure 4.23. This part consists of the oscillator, the buffer, and the two V/I converters. The operation of the excitation block has been explained in section 4.8.1. The diagram of the detection part was given in figure 4.18. The procedure described in the previous sections and the associated test libraries have been used for the test plan generation of these parts. Here, the voltage follower between the oscillator and the V/I converter is taken as the example macro. The reasons for taking this macro are that it is a very general block and that various time and frequency domain tests can be applied to it. The design parameters unity gain bandwidth, output saturation voltage and output offset were selected as crucial by the designer of the compass watch. To cover these design parameters, three test configurations have been selected: the frequency response, DC sweep and offset tests. The tests have been instantiated using the data supplied by the designer. The resulting test configuration instances have been sequenced, and control signal generation and routing commands have been added. The obtained test plan can be seen in figure 4.24 as it is observed by the test engineer. In this way, the resulting order of tests and the test parameters of the selected tests can be seen and modified if necessary. Note that the control signal generation (line 0 in the test plan in figure 4.24) and the routing routines (lines 1, 2, 4, 5, 7 and 8 in the test plan in figure 4.24) are automatically included in the test plan.


Figure 4.23: Excitation part of the compass watch

The test program generated by the mismatch generate VI has been applied to the compass watch IC. The test results shown in Appendix C confirm that all the parameters lie within the ranges specified in table 4.1.

4.10 Discussion of Experiences

The purpose of the design and test process of the compass watch was to verify that specific design and, more particularly, test ideas can be applied to a real mixed-signal IC. The most important issues that have been evaluated are:

• decrease in the estimated test development time,

• decrease in the estimated test debugging effort,

• ease of setting up, extending and maintaining design and test databases to support MISMATCH,

• any problems arising from the understanding/use of design/test parameters by the designer and the test engineer.

Figure 4.24: Test plan generated for the buffer macro

The first two points must be compared to a hypothetical case where the same test configurations are used and reused in a 'copy-and-paste' manner, as is done in most industrial test development work. Therefore, the gain in test development time will not lie in rewriting the same test configurations, but rather in choosing test configurations, putting them together according to the designer-based specifications and adding the tester setup data. The gain from these functions will depend on the complexity of the tester. The experience from this work has shown that the time gains are not substantial on a tester such as the one described here. To give an example, the time it would take to set up the test plan, given a list of specifications from the designer and the DUT structure, would be around a few hours. For more complex test systems, where the number of test modules is higher and the interconnections with routing elements are more complicated, the gain in development time can be larger.

A more significant time gain can be expected from the test debugging time. Again, the IC at hand is not an example of a complicated design. It became clear during the testing activities that errors in setting up the system, such as forgetting to run the power supply routines or not generating all the control signals needed for a specific macro (and thereby not being able to obtain measurements), occurred much more often when these operations were performed manually. It is expected that this effect will be much more visible in very complicated systems. Unfortunately, test debugging time is not easy to estimate, and the usefulness of a system such as MISMATCH can be better verified after regular use as an industrial verification environment. As to the ease of maintaining the design and test databases, part of the databases already exists in design and test environments, such as the test configurations and DfT blocks. The difference that MISMATCH brings is that with every test configuration added, test information and test module connection files have to be supplied. The time spent on these tasks (in this case about ten minutes for one of the example test configurations used for the compass watch) is negligible. Other items that require regular attention and checking are the maintenance of the 'design parameter-test' links and the DfT library. These last two tasks require regular consultation between the designer and the test engineer to keep the system running. However, none of the databases to be maintained requires a significant time investment to keep MISMATCH functional.

4.11 Conclusions

The structure and operation of an integrated design-test framework have been described. Summarizing the results from the demonstrator design and test, it can be concluded that the philosophy of MISMATCH is applicable to the industrial verification test of mixed-signal IC's, with a potential gain in test generation and debugging time. Furthermore, the reuse of blocks is supported in both the design and test environments, which has advantages beyond saving the time for rewriting tests: in such a framework, the test plan can be adapted much faster when required by local changes or additions in the design, as well as by a possible change of testing strategy for a single macro. The test plan generation is completely automated, although the designer and the test engineer have a complete overview of and control over the final test plan. In the case that a test plan is modified manually, the tester settings and control signals can be adapted to the selected test set automatically. An essential point to be stressed about the compass watch example is that it does not take into account the reuse aspect of the MISMATCH functionality. The philosophy of MISMATCH is based on the repetition of similar types of blocks in subsequent designs and use over a longer period of time. In this case, such a verification has not been possible for practical reasons, although issues such as the maintainability of the framework and the use of generic tests promise good applicability to the extent of our observations. As to the portability of the MISMATCH software to other test platforms, using the software on a similar VXI-based test platform will not give many problems. However, the instrument drivers must be replaced and modifications may be necessary in the routing algorithms. On other test platforms further modifications may be necessary; however, the system structure and the selection, configuration and combination of test configurations remain essentially the same. The framework described here must be seen merely as one application of a specific design and test philosophy, independent of the design or test platforms on which it can be applied. Finally, something has to be said on whether there are limitations on the IC structure for it to be testable 'the MISMATCH way'. The only requirements are that the circuit has to have a hierarchical structure and that DfT structures must be placed between the blocks that have to be tested separately. There is no limitation with respect to the type of the DfT block, as long as the DfT model is supplied in a standard description as explained in this chapter. There is also no limitation on the size of the blocks chosen as macros, although this choice naturally affects the level of detail that can be attained in the test results. The example macro used in the previous sections would normally be too small to be taken as a functional testing unit in an industrial design; however, the test principles and procedures will remain similar in the case of a larger macro.

4.12 Bibliography

[Coud95] O. Coudert and J.C. Madre, "New Ideas for Solving Covering Problems," in Proceedings of 32nd Design Automation Conference, 1995, pp. 641-646.

[Coud96] O. Coudert, "On Solving Covering Problems," in Proceedings of 33rd Design Automation Conference, Las Vegas, USA, June 1996, pp. 197-202.

[Diem96] G. Diemel, Design of a Compass Wrist-watch, Master of Science Report, Report No: 060.1562, University of Twente, Faculty of Electrical Engineering, 1996.

[Engi97a] N. Engin, R. Tangelder, H. Kerkhoff and H. Speek, Test and evaluation of test strategy, DfT structures and test signals in demonstrator #1, Deliverables B1.D17 & C3.D19, ESPRIT report BLR Project #8820 - AMATIST (Analog & Mixed-signal Advanced Test for Improving System-level Testability), August 1997.

[Goel81] P. Goel, "An Implicit Enumeration Algorithm to Generate Tests for Combinational Circuits," in IEEE Transactions on Computers, Vol. C-30, March 1981, pp. 215-222.

[Gros97] T. Grossman and A. Wool, "Computational Experience with Approximation Algorithms for the Set Covering Problem," in European Journal of Operational Research, Vol. 101 (1997), pp. 81-92.

[Inte95] IntegraTEST Series 30 Product Guide, MicroLEX Systems A/S, Hoersholm, Denmark, 1995.

[Kerk97] H. Kerkhoff, R. Tangelder, H. Speek and N. Engin, ESPRIT BLR Project #8820 AMATIST Deliverable C1.D9, MESA Research Institute, University of Twente, May 1997.

[Mari97] E.J. Marinissen and M. Lousberg, "Macro Test: A Liberal Test Approach for Embedded Reusable Cores," in Proceedings of 1st International Workshop on Testing Embedded Core-Based Systems, paper 1.2, 1997.

[Nee 96] K. van Nee and P.J. Bouma, Development of Virtual Test Instruments, Report No: 060.2045, University of Twente, Faculty of Electrical Engineering, April 1997.

[Sequ97] SequenTEST Version 2.2 User's Guide, MicroLEX Systems A/S, Hoersholm, Denmark, March 1997.

[Soma90] M. Soma, "A Design-for-Test Methodology for Active Analog Filters," in Proc. International Test Conference, 1990, pp. 183-192.

[Tang97] R. Tangelder, G. Diemel and H. Kerkhoff, "Smart Sensor System Application: An Integrated Compass," in Proc. European Design and Test Conference, Paris, France, March 1997, pp. 195-199.

[Vosm85] D. Vosman, "Automatic Test Pattern Generation for Integrated Digital Circuits," Report, University of Twente, Faculty of Electrical Engineering, November 1985.

[Wess96] J.M. Wesselink and E. Yilmaz, Integration of a Test and Design System Using LabVIEW and Cadence, Report No: 060.1647, University of Twente, June 1996.

Chapter 5

Defect-Oriented Test Evaluation for Analog Blocks

5.1 Introduction

In the previous chapter, an environment for the integration of the design and test processes has been described. The main accent has so far been on the design evaluation of analog and mixed-signal blocks, and it has therefore been assumed that measuring the key design parameters of a macro is sufficient. This is, however, only part of the reality. Although functional testing still has an important place in the test of the analog and mixed-signal parts of an IC, when a high IC quality is required, testing cannot be considered only at the specification level. The physical structure of an IC and the manufacturing process have direct consequences with regard to the occurrence of various defect mechanisms. In design evaluation, testing the correctness and the performance of the design also plays a role. In all the remaining stages of production testing, however, the defect mechanisms are the main cause of IC failures. This is why a closer look at the link between defect mechanisms and analog tests is an indispensable aspect of bringing design and test together. The subject of this chapter is to find a measure for the effectiveness of tests. To what extent does the measuring of design specifications ensure a fault-free product? Is it possible to simulate the tests on hypothetical defective circuits to estimate the answer? Can this be part of a real design-test flow?


What are the obstacles and possible solutions? To find answers to these questions, the evaluation of analog tests on the basis of possible defects will be discussed. The challenges and the current state of the art in the assessment of defect coverage for analog tests will be outlined.

5.2 General Test Selection Criteria

It has already been explained in chapter 2 that certain IC market requirements (such as quality) have a direct relationship with the way the IC is tested. The main criteria for choosing the test procedure for an IC are:

• test cost,

• test effectiveness.

The test cost is dependent on many factors, as explained in section 2.4.2. Test effectiveness can be defined as the ability of a test to discriminate between a fault-free and a faulty circuit. In other words, an effective test is one that has a minimal probability of causing testing errors. The effectiveness of a test depends primarily on the test signals and measurement methods used and on the structure and specifications of the IC. In general, test effectiveness can be estimated in terms of specification coverage or fault coverage.

5.3 Specification Coverage vs. Fault Coverage

The specifications of an IC outline the functionality expected from the final product. The term specification is used to express different sets of parameters by the designer and the sales team. The specifications on the datasheet of an IC indicate the key features of the device, essentially the parameters that will be of interest when the device is placed on a board to form part of a higher-level design. The set of specifications used by the designer (design specifications) is, however, often a superset of the specifications on the datasheet for the first version of the IC. For the first prototype, a large part or all of the design specifications are tested. The production test consists primarily of verifying the datasheet specifications. As new versions of the IC are released, the number of specifications tested decreases while the


Figure 5.1: Relation between specifications and the test process

confidence in certain functions and specifications increases (see figure 5.1). An example of a design parameter which is often not explicitly specified or tested for is the phase margin of an operational amplifier. The specifications each macro has to satisfy so that the overall IC design specifications are met have been given the name design parameters in the discussions of the previous chapters. For mixed-signal macros, measuring the most critical specifications is, and will continue to be at least in the foreseeable future, an important part of mixed-signal production test practice [Csiz98]. These are the specifications that include improvements with respect to the previous release of the same product, or that are of key importance for the functioning of the final product. An example of such a specification is the THD for a mixed-signal IC for audio applications [Csiz98]. In fact, testing these specifications verifies that the circuit performance is within the expected limits with respect to the measured specification, not that the circuit is defect-free. If there exists a defect which does not dramatically affect the measured parameter under the test conditions, the IC will still pass the test. Specification coverage can be defined as the percentage of the key design specifications that are verified by the test. As a test selection criterion, this is especially important for the prototype test, since at this stage testing aims at detecting design errors as well as errors from the production process. For example, it is required to evaluate the statistical distribution of the design parameters by means of measurements on the batch of prototypes. The requirements in production test are quite different. First of all, the yield problems related to process variations should be solved, or at least minimized, by the mass production phase [Xing98]. During mass production, the main cause of yield loss is therefore identified as spot defects. The design errors also have to be detected and corrected at the prototype test stage. Hence, the detection of behavior deviating from the specifications because of these is not an aim in production test. The measurement of certain specifications during production test is carried out because it is required by the customers and for monitoring the functional yield. The specification-based tests do detect a percentage of the possible defects; however, experience from industry shows that this percentage remains below the requirements for high-quality IC's [Engi99a]. For digital IC's, the fault coverage based on the stuck-at fault model has long been a measure of how well a circuit is being tested. For analog circuits, however, the PPM requirements have only in recent years come to a point that requires improvements in test methods and makes the influence of test on quality important. This has caused an increase of interest in analog fault models and defect-oriented testing in general, and this interest is also shared by a small part of the industry, e.g., [Xing98], [Beur99]. The biggest obstacle against the usage of fault coverage as a measure of test effectiveness is the large CPU time needed to simulate the tests using standard circuit simulators. The technical alternatives for solving this problem will be discussed in the following sections.

5.4 Manufacturing Defects and IC Faults

The main reason that makes production testing necessary is the existence of unpredictable and uncontrollable phenomena in the different steps of the manufacturing process. Since all defects resulting from imperfections in the manufacturing process would form an inexhaustible list, it is of crucial importance that the most common defects are identified and that corresponding fault models are used for evaluating the tests to be applied. To form a basis for fault simulation techniques, important concepts related to the types and modeling of manufacturing defects will be described in the coming sections. The most common causes of defects in manufacturing are human errors and equipment failures, process instabilities, material instabilities, substrate inhomogeneities and lithographic spot defects [Sach98]. In general, a defect can be viewed as a geometrical or an electrical defect. A geometrical defect is caused (among other factors) by mask alignment problems, lithographic errors and spot defects. This kind of defect causes a die to have a different topology than the IC layout. An electrical defect, on the other hand, is related to the electrical properties of the material on the wafer. An example of this is a transistor having a low threshold voltage because of poor temperature control during gate oxide growth [Sach98]. Both geometrical and electrical defects can have local or global effects. These effects can cause various types of faults, depending on the IC topology and the nature of the defect. In general, one can distinguish structural faults and performance faults in an IC. Faults can be categorized according to the effect of their modeling on the circuit topology:

Definition 5.1 A fault which, when electrically modeled, causes a change in the circuit topology of the IC is called a structural fault.

Definition 5.2 A fault that cannot be modeled as a structural change of the topology but changes the IC characteristics is called a parametric fault.

Another categorization of faults relies on the effect of a fault with regard to the circuit functionality:

Definition 5.3 A fault that prevents the circuit from performing (a part of) its function is called a hard (catastrophic) fault.

Definition 5.4 A fault that does not prevent the circuit from performing its function, but causes it to operate outside its specification range, is called a soft fault.

It is a common mistake in the test literature to confuse the terms structural fault/hard fault and parametric fault/soft fault. Some researchers use these terms interchangeably. Although it is true that parametric faults are often soft and that some structural faults cause the circuit to malfunction, this is not true in general. The terms soft and hard fault give an idea about how difficult a fault is to detect, not whether the fault changes the circuit topology. Many faults which change the circuit topology do not cause the circuit to malfunction completely. In general, global defects are considered more likely to cause parametric faults, while local defects will more often cause structural faults [Maly86]. For example, a temperature control problem such as the one mentioned above can affect the whole wafer (thus constituting a global defect); however, it is not very likely that the effect will be sufficiently large to change the functioning of the circuit. On the other hand, if this effect is large enough to change the functionality (i.e., to manifest itself as a hard fault), then this change will be quite radical, so that it will be quickly observed before or during the initial phase of the test process. In general, it is agreed that extremely large errors will either be noticed at the foundry before the batches ever make their way to the test site, or their effects will be large enough to be detected by almost any test input. For this reason, the main challenge in the defect-oriented testing of IC's lies in the detection of local parametric and structural faults. Another fact confirmed by industrial experience is the large proportion of local faults among the identified common IC faults. According to Stapper [Stap95], the most common defect causes are lithographic defects. These include the missing material and extra material defects for each photolithographic layer. In a recent study made on IC's from various CMOS, BICMOS and bipolar processes from the Philips manufacturing sites in Nijmegen, The Netherlands, particle and patterning defects are shown to be the dominant causes among those IC failures of which the causes could be identified [Pol 96]. Other experiences in IC manufacturing also show that for well-centered designs with mature processes, particle contamination is the main cause of IC failure [Xing98]. The discussion from this point on will be limited to particle defects, and in particular to the bridging faults which are the result of conducting particle defects.

5.5 Layout Based Fault List Extraction

The modeling of manufacturing faults makes it possible to insert a fault model in the circuit schematic, run a simulation with a test signal as input and check whether the output differs from the output of the fault-free circuit, i.e., whether the fault will be detected by the test in question. However, realistic modeling and estimation of the IC yield also requires knowledge of the probability of the faults. Furthermore, when it is known with which probability a fault will occur, the task of selecting a subset of all possible faults becomes easier. This is necessary since time and resources often do not allow the simulation of all faults. The process of obtaining a weighted fault list based on the process statistics and the IC layout is called fault extraction. The probability of occurrence of a particle fault depends on the layout geometry and on the characteristics of the process step related to the specific layer(s) where the fault occurs. To give an example, figure 5.2(a) shows a two-dimensional sketch of a conducting particle between two metal lines. For the sake of simplicity, the defect is assumed to be circular, which of course does not have to be the case in reality. This defect can be modeled as a bridging fault of resistance Rf, as seen in figure 5.2(b). In order to calculate the probability of the fault, the defect size distribution of the manufacturing process has to be known. Various models exist for the defect size distribution [Moor87]; however, the most common one has been introduced by Stapper [Stap83]. This linear-power law distribution, denoting the fault probability versus the defect diameter, is shown in figure 5.3. Defect size distribution functions for various defect types (such as open, short and pinhole) and various process layers (metal, poly) can be obtained for a given manufacturing process [Gyve91]. In the distribution shown in figure 5.3, the parameter x0 corresponds to the most common defect radius and the parameter p to the exponent of the power-law decrease for radii larger than x0. For a mature production line p is about 4 to 5 [Zano96], and p = 3 is the standard value in many layout-based analysis tools such as VLASIC [Moor87].
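The distribution itself is only shown graphically in figure 5.3; a commonly quoted form of the linear-power law, given here as an assumption consistent with the parameters $x_0$ and $p$ mentioned above, is

$$p(x) = \begin{cases} k\,x & 0 < x \le x_0 \\[4pt] k\,\dfrac{x_0^{\,p+1}}{x^{\,p}} & x > x_0 \end{cases}$$

where $k$ is a normalization constant: the distribution rises linearly to its peak at $x_0$ and decays with exponent $p$ for larger defect radii, the two branches being continuous at $x = x_0$.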

5.5.1 Critical Area

Definition 5.5 The critical area due to a defect is defined as the area of the region on the die with the property that if the center of the defect is located at a point in this region, it will cause a fault.

In this definition, a defect is assumed to have circular/spherical shape. The probability of a fault can be calculated as the critical area due to the fault divided by the total area of the IC. Critical area due to a specific fault is dependent on the layout geometry as well as the defect size distribution corresponding to the related layer(s) and defect types. As an example, in figure 5.2, for a particle size smaller than the distance between the metal lines the probability of a fault is zero. In the example of figure 5.2(c), the critical area between the lines due to a defect diameter of 3s/2 can be seen. When a conducting particle is located between the metal lines with its center point corresponding to this area, a bridging fault occurs.

Figure 5.2: Example of (a) a defect, (b) its bridging fault model, and (c) critical area

5.5.2 Fault Probability Calculations

In order to calculate the probability of a fault, a software tool that can calculate or estimate the critical area has to be used. Earlier fault extraction tools such as VLASIC [Moor87] use Monte Carlo simulations to estimate the critical area; this method is referred to as Inductive Fault Analysis (IFA). More recent tools for fault extraction [Xing98] and yield prediction [Gyve91] make use of analytical critical area calculations because of their higher accuracy and lower CPU time requirements.

Figure 5.3: Defect size distribution function

Returning to the example of figure 5.2, the probability that a bridging fault will occur between the two lines can be obtained as the sum of the probabilities calculated for all possible defect radii. This sum can be represented with the integral:

$$P(f) = \int_{min}^{max} A(r) \cdot p(r) \, dr \qquad (5.1)$$

where A(r) denotes the critical area size corresponding to the defect radius r, and p(r) the value of the fault probability for radius r. P(f) is the probability that a bridging fault occurs between lines $l_1$ and $l_2$, and min and max are the minimum and maximum defect radii [Gyve91]. When the probabilities P(f) are calculated for all possible bridging faults, a fault list with corresponding probabilities is obtained. The probabilities can then be used to assign a weight to each fault $f_i$, given by

$$w_i = \frac{P(f_i)}{\sum_k P(f_k)} \qquad (5.2)$$

where $w_i$ is the weight of fault $f_i$ and $\sum_k$ denotes summation over all k. In this case, the weighted fault coverage is calculated based on the weighted sum of detected faults, as

$$FC_{weighted} = \frac{\sum_k w_k f_k}{\sum_k w_k} \qquad (5.3)$$

where $w_k$ is the weight of fault k, $f_k$ is a coefficient equal to 1 if fault k is detected and 0 otherwise, and $FC_{weighted}$ is the weighted fault coverage.

To summarize, techniques such as fault extraction and fault simulation make it possible to obtain an estimate of test signal effectiveness and IC quality. However, a few points should be handled with care. First of all, the fault coverage as explained in this section depends on the modeling of faults. If the IC quality is to be estimated based on the defined fault coverage, the fault modeling has to be realistic and all common fault types have to be covered. It should also be remembered that the fault coverage as defined here (based on spot defects) will be less important for IC quality when the manufacturing process is not mature and large process variations are involved. Another important point is the precision required in determining the defect size distribution of a manufacturing process. In practice, the defect sizes are often measured by using defect monitors with comb-like structures on wafers [Brul91]. The accuracy of the obtained distribution has a direct effect on how realistic the fault coverage figures are.
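As a minimal numerical sketch of equations 5.1-5.3, assume the parallel-line geometry of figure 5.2 with line length L and spacing s, for which the critical area of a circular defect of radius r is approximately L(2r - s) once the defect diameter exceeds s (end effects ignored). All numbers below are illustrative assumptions, not process data:

```python
import numpy as np

def critical_area(r, L=100.0, s=1.0):
    """A(r) for a bridge between two parallel lines (um^2): zero until the
    defect diameter 2r exceeds the line spacing s, then L*(2r - s)."""
    return np.maximum(0.0, L * (2.0 * r - s))

def defect_pdf(r, x0=0.5, p=3.0):
    """Illustrative linear power-law defect size distribution (cf. figure 5.3)."""
    return np.where(r < x0, r / x0**2, x0 ** (p - 1) / r**p)

# Equation 5.1: P(f) as the integral of A(r)*p(r) over the defect radii
r = np.linspace(0.01, 20.0, 4000)
P_bridge = np.trapz(critical_area(r) * defect_pdf(r), r)

# Equations 5.2 and 5.3: weights and weighted fault coverage for a small list
P = np.array([P_bridge, 0.5 * P_bridge, 0.1 * P_bridge])  # hypothetical P(f_i)
w = P / P.sum()                                           # eq. 5.2
detected = np.array([1, 0, 1])                            # f_k: 1 if detected
FC_weighted = (w * detected).sum() / w.sum()              # eq. 5.3
print(P_bridge, FC_weighted)
```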

5.6 Analog Fault Simulation as a Test Evaluation Method

The relationships described in the previous sections form a basis for evaluating a test procedure. With the fault models at hand, the next challenge is to see whether the simulation steps required to simulate the circuit with the fault models injected (fault simulation) are computationally feasible. Experience from practitioners of fault simulation shows that the fault simulation of analog circuits is a computationally intensive task, requiring very long CPU times for medium and large analog blocks [Xing98]. An example of this simulation overhead is the functional test of a PLL [Harv94]. The simulation of such a test requires a transient simulation, which takes in the order of hours, the exact time depending on the machine and the simulation length. Assuming that one simulation takes 2 hours, simulating even a fault list with a modest size of 200 faults would take 400 hours, i.e. more than two weeks!

Since it is often practically infeasible to simulate all possible modeled defects, realistic limitations are often defined to truncate the fault list. One possible method is ordering all the faults according to their weights, and taking as the fault list the set of higher-weighted faults corresponding to a given percentage of the total fault weight. This total weight percentage can also be dictated by the desired quality figure, making a trade-off between CPU time and IC quality necessary [Beur99]. Another common approach is assuming that only one fault at a time will be present in the die (the single fault assumption). The single fault assumption has already been used for the defect-oriented testing of digital circuits, and there is general agreement [Pate98] that it does not lead to severe inaccuracies in determining the effectiveness of a test set. However, for analog circuits, even this assumption is not sufficient to bring the simulation time down to practical levels. As a result, analog fault simulation is often applied with truncated fault lists and a limited number of fault models, which decreases its reliability as a realistic test evaluation method. Decreasing the complexity of analog fault simulation is thus one of the important requirements for increasing test effectiveness and consequently IC quality. In the rest of this chapter, a formal definition of the analog fault simulation problem and an analysis of the existing methods for efficient fault simulation will be presented. This discussion is meant to form a basis for the development of a new fault simulation concept, the implementation of which will be described in chapter 6.
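A sketch of the weight-based truncation described above; the 90% cumulative-weight target is an arbitrary illustration of the CPU time versus IC quality trade-off:

```python
def truncate_fault_list(faults, weights, target=0.90):
    """Keep the highest-weighted faults until their cumulative weight
    reaches the target fraction of the total weight."""
    order = sorted(range(len(faults)), key=lambda i: weights[i], reverse=True)
    total = sum(weights)
    kept, acc = [], 0.0
    for i in order:
        kept.append(faults[i])
        acc += weights[i]
        if acc >= target * total:
            break
    return kept

# Example: the two heaviest faults already cover 90% of the total weight
print(truncate_fault_list(['f1', 'f2', 'f3', 'f4'], [0.6, 0.3, 0.07, 0.03]))
```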

5.7 Problem Definition

The fault simulation problem that we are addressing can be stated as follows: assume that a given nonlinear CUT (Circuit Under Test) has to be simulated for a list of $N_F$ faults. If the golden simulation takes time T, one can expect that with a standard circuit simulator the whole fault simulation will take a time of approximately $T \cdot (N_F + 1)$. There are several reasons why an optimized fault simulator is necessary:

• The time $T \cdot (N_F + 1)$ can be very long for a large circuit and/or a large fault list, and hence become impractical.
• The introduction of a fault can cause the set of equations to have numerical convergence problems [Stra99] and/or become ill-conditioned, depending on the location of the fault. For the results to be fully trustworthy, fault simulation tools should have features to check for (and where possible compensate for) such problems.

The objective of the rest of this chapter is to investigate the possibilities for reducing the simulation time $T \cdot (N_F + 1)$. In the following discussion of fault simulation methods, the transient simulation of an analog circuit including nonlinear devices is considered. For these circuits, the fault simulation time is very high and the existing methods for speeding up the simulations are often not applicable.

For simulation purposes, the circuit topology has to be converted into a set of equations. These equations can be obtained by means of modified nodal analysis (MNA), a method used in most standard circuit simulators. Details of this technique will not be explained here but can be found in texts on circuit simulation methods, such as [Call88] and [Sing86]. MNA transforms the circuit topology into a set of nonlinear differential equations, the unknowns of which are node voltages in the circuit and branch currents of certain elements. The simulation problem can then be expressed as solving this nonlinear set of equations,

$$f(\dot{x}(t), x(t), u(t)) = 0 \qquad (5.4)$$

where $x$ is the vector of unknowns (voltages and currents in the circuit), $\dot{x}$ denotes the vector of time derivatives of $x$, $u$ denotes the input vector and $f$ is the nonlinear vector function relating the node voltages and branch currents in $x$ to the circuit structure. In this equation $0$ denotes the zero vector, $0 \in \mathbb{R}^n$, where $\mathbb{R}^n$ denotes the n-dimensional space. The simulation procedure consists of solving the equation above for each of the time points $t_i$ with $t_i \in [0, T]$, where T is the given end time of the simulation. The steps between time points (i.e. time steps) can be predefined or variable. In the variable time step scheme, each point $t_i$ is selected (based on the variation of the node voltages and the estimated local truncation error) after the solution is obtained for time point $t_{i-1}$. In this case $t_1$ has a default value determined by the user-defined time step.

Before proceeding to fault simulation methods, the computational methods involved in a standard circuit simulator will first be briefly described.

5.8 Standard Methods in Circuit Simulation

Since the well-known circuit simulation tool SPICE was developed at Berkeley at the beginning of the seventies [Nage75], many different methods and concepts have been introduced in this field. Most simulators in common use today are, however, still improved variants of SPICE, and SPICE-like simulators are applicable to a large class of circuits with acceptable performance. For these reasons, this category of circuit simulators is taken as the basis in most fault simulation research.

A SPICE-like simulator is in general a software tool that solves nonlinear equations such as 5.4 by using Newton-Raphson (NR) iterations, solving the linearized equations by means of LU decomposition at each iteration. In figure 5.4 the simulation flow common to all SPICE-like simulators is given for the transient simulation of a nonlinear circuit. As seen in figure 5.4, the simulation moves along the time axis, calculating the solution at every time point (denoted by $t_i$ in box 2 in the figure). The time point solution is obtained by means of NR iterations. The equations obtained after each simulation step are indicated next to the dark grey nodes in the flowchart of figure 5.4. It can be seen that an NR iteration consists of numerical integration, linearization and LU decomposition steps, after which convergence is checked. Numerical integration (see box 3 in figure 5.4) is used for converting the set of nonlinear differential equations to a set of nonlinear equations by replacing the differential elements with difference expressions. SPICE-like simulators use a standard numerical integration method such as backward Euler or trapezoidal integration.

Figure 5.4: SPICE flowchart (box 1: modified nodal analysis; box 2: time-point loop; box 3: numerical integration; box 4: linearization; box 5: LU decomposition; box 6: convergence check; box 7: time step selection; box 8: backtracking; box 9: NR iteration loop)

The set of nonlinear equations obtained at the numerical integration step is of the form

$$F(x) = 0 \qquad (5.5)$$

where $F$ is the function obtained from $f$ in equation 5.4 by substituting numerical integration expressions for the differential terms resulting from energy storage elements. This equation is linearized (see box 4 in figure 5.4) around the previous iteration value. Assuming that the simulation is at iteration k + 1, the solution of the previous iteration can be denoted by $x(t_i)^{(k)}$ (iteration k for the solution of time point $t_i$).

The linearization of equation 5.5 is based on the Taylor expansion of the function $F$ around $x(t_i)^{(k)}$. When only the first-order (i.e. linear) term of this expansion is taken and $F(x(t_i)^{(k+1)})$ is set to zero, the expression

$$x(t_i)^{(k+1)} = x(t_i)^{(k)} - \left[ J(x(t_i)^{(k)}) \right]^{-1} \cdot F(x(t_i)^{(k)}) \qquad (5.6)$$

is obtained. In this expression, $J(x(t_i)^{(k)})$ denotes the Jacobian matrix of the function $F$ evaluated at $x(t_i)^{(k)}$; the Jacobian is the n-dimensional counterpart of the scalar derivative. The elements of the general expression for $J(x)$ are given by the derivatives of the vector function $F$ with respect to the variables in $x$, i.e.

$$J(x) = \begin{pmatrix} \frac{\partial F_1(x)}{\partial x_1} & \frac{\partial F_1(x)}{\partial x_2} & \frac{\partial F_1(x)}{\partial x_3} & \cdots & \frac{\partial F_1(x)}{\partial x_n} \\ \frac{\partial F_2(x)}{\partial x_1} & \frac{\partial F_2(x)}{\partial x_2} & \frac{\partial F_2(x)}{\partial x_3} & \cdots & \frac{\partial F_2(x)}{\partial x_n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{\partial F_n(x)}{\partial x_1} & \frac{\partial F_n(x)}{\partial x_2} & \frac{\partial F_n(x)}{\partial x_3} & \cdots & \frac{\partial F_n(x)}{\partial x_n} \end{pmatrix} \qquad (5.7)$$

where the vector function $F$ is given by

$$F(x) = \left( F_1(x), F_2(x), F_3(x), \ldots, F_n(x) \right)^T \qquad (5.8)$$

Equation 5.6 forms the basis of the NR iterations. This equation has to be solved to calculate the new solution $x(t_i)^{(k+1)}$. Denoting $x(t_i)^{(k+1)}$ by $x_{k+1}$ and $x(t_i)^{(k)}$ by $x_k$, equation 5.6 can be written as

$$x_{k+1} = x_k - \left[ J(x_k) \right]^{-1} \cdot F(x_k) \qquad (5.9)$$

Solving 5.9 is equivalent to solving the set of linear equations

$$J(x_k) \cdot \Delta x_k = -F(x_k) \qquad (5.10)$$

for $\Delta x_k$, where $\Delta x_k = x_{k+1} - x_k$.

A matrix equation solution technique has to be used for solving the linear set of equations 5.10. Because the direct inversion of $J(x_k)$ or elimination techniques such as Gaussian elimination require a large number of operations, the (faster) LU decomposition method is used in standard simulators (see box 5 in figure 5.4). This method solves a set of linear equations such as 5.10 by decomposing the matrix $J(x_k)$ into lower and upper triangular matrices $L$ and $U$, such that

$$J(x_k) = L \cdot U \qquad (5.11)$$

This so-called LU decomposition converts equation 5.10 into

$$L \cdot U \cdot \Delta x_k = -F(x_k) \qquad (5.12)$$

Defining

$$U \cdot \Delta x_k = z \qquad (5.13)$$

the following equation results:

$$L \cdot z = -F(x_k) \qquad (5.14)$$

where $z = (z_1, z_2, \ldots, z_n)^T$.

The set of equations in 5.14 can be solved easily since $L$ is a lower triangular matrix. The first equation in 5.14 (involving the uppermost rows of $L$ and $F(x_k)$) readily yields the first unknown, $z_1$. The obtained variable $z_1$ is then inserted in the second equation to get $z_2$, and so on, until $z_n$ is obtained. These operations for calculating the members of the vector $z$ are called forward substitutions. Once $z$ is known, equation 5.13 can be solved for $\Delta x_k$ with similar steps, this time starting from the last variable $\Delta x_{k,n}$ since $U$ is an upper triangular matrix. These operations are in turn called backward substitutions.

At the end of each NR iteration, convergence is checked (see box 6 in figure 5.4). The standard convergence check includes two comparisons, one between the values of the elements of $\Delta x_{k-1}$ and the elements of $x_k$, and the other between the values of the elements of $\Delta F = F(x_k) - F(x_{k-1})$ and the elements of $F(x_k)$. Convergence is established when the values of $\Delta x_{k-1}$ and $\Delta F$ are very small compared to the values of $x_k$ and $F(x_k)$. This judgement is based on the voltage and current tolerance values which are selected by the user. More information on convergence checks can be found in textbooks on circuit simulation, such as [Ogro94] and [Sale94]. If the maximum number of NR iterations is reached before convergence, backtracking is performed, i.e., the time step is made smaller and the NR iterations are repeated. If convergence is not reached after a predetermined number of backtracks, the simulation is aborted.

When a solution for the time point is found, the simulation is carried on for the next time point. If a constant time step is used, all time points are predefined by the time step selected by the user. When variable time steps are involved, the selection of the time point $t_{i+1}$ and the time step $h_i = t_{i+1} - t_i$ (box 7 in figure 5.4) depends on the local truncation error (LTE) corresponding to the result obtained for time point $t_i$. The LTE is the difference between the exact solution and the solution reached at the end of the NR iterations. It has been explained in the previous paragraphs that equation 5.6 is obtained from the Taylor series expansion of the nonlinear function $F$ by taking the first-order term only. When an NR iteration converges, the higher-order terms of this expansion, evaluated at the solution point, make up the LTE. It is possible to estimate this quantity by taking a certain number of these terms. Details of this procedure can be found among others in [Nage75]. The steps described above are repeated for each time point until the end time T of the transient simulation is reached (box 2 in figure 5.4).

A general overview of the transient simulation procedure typical of SPICE-like simulators has been presented. The methods used in these simulators for DC and AC simulations are, mathematically speaking, all parts or simpler special cases of this procedure. The DC simulation is equivalent to one time point of the time-domain simulation, while the AC simulation makes use of the same linear algebraic methods with a different equation construction to solve for the steady-state response of the circuit. More details on circuit simulators and SPICE can be found in textbooks such as [Call88], [Sing86], [Chua75] and [Ogro94].
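As a concrete miniature of boxes 4-6, the sketch below computes the DC operating point of a toy circuit (a voltage source, series resistor and diode) with NR iterations, solving each linearized system via LU factorization. All component values and tolerances are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

Vs, R, Is, Vt = 5.0, 1e3, 1e-12, 0.02585   # source, resistor, diode parameters

def F(x):
    """Nodal equation at the diode node: resistor current + diode current = 0."""
    v = x[0]
    return np.array([(v - Vs) / R + Is * (np.exp(v / Vt) - 1.0)])

def J(x):
    """1x1 Jacobian of F with respect to the node voltage."""
    v = x[0]
    return np.array([[1.0 / R + (Is / Vt) * np.exp(v / Vt)]])

x = np.array([0.6])                         # initial guess
for k in range(100):
    lu, piv = lu_factor(J(x))               # box 5: LU decomposition
    dx = lu_solve((lu, piv), -F(x))         # forward/backward substitutions
    x = x + dx
    if np.max(np.abs(dx)) < 1e-9:           # box 6: convergence check on delta-x
        break
print(x[0], k)                              # diode voltage near 0.57 V
```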

5.9 Simulation Complexity

Since high CPU time is the most important drawback of analog fault simulation, it becomes necessary to examine the main operations in the SPICE flow and look for ways of decreasing their total runtime under fault simulation conditions. A few basic concepts of computational complexity must first be introduced. For details of these concepts and the analysis of algorithms the reader is referred to [Bana91].

Definition 5.6 A function that estimates the runtime of an algorithm in terms of the size of its input is called the runtime complexity of the algorithm.

One example of runtime complexity can be given as follows. Assume an algorithm that takes two square matrices as input and returns their product as output. In this case, the number of rows/columns of the matrices is the size of the input, since the runtime depends on this quantity. Assume that two $n \times n$ matrices A and B are the inputs of this algorithm, and the output is $C = A \cdot B$. Then the computation requires n multiplications and n - 1 additions for each element of C. Since C has $n^2$ elements, the total number of multiplications required is $n^3$ and the total number of additions required is $n^2(n-1) = n^3 - n^2$.

Besides runtime complexity, space complexity is also sometimes used for algorithms, denoting the maximum memory space occupied by the algorithm. Since the limiting factor for the simulators within the context of this and the following chapter is runtime, the term complexity will be used to denote the runtime complexity only.

Definition 5.7 Basic operations (such as addition, multiplication, comparison) which make up the building blocks of an algorithm are called unit operations.

In the example above, the scalar additions and multiplications performed on the elements of A and B are the unit operations of the example algorithm. The time complexity of an algorithm is determined in terms of the number of performed unit operations of one kind, called the basic operation. The complexity is denoted by the order of the function that expresses the number of operations in terms of the size of the input. We shall call this function the cost function of the algorithm. To refer again to the example of matrix multiplication, the cost function of the matrix multiplication is $n^3$, denoting the number of multiplications (the unit operation for this algorithm) in terms of the input size n. In this case, the complexity of this algorithm becomes of the order of $n^3$, denoted as $\Theta(n^3)$. To give another example, an algorithm with the cost function $f(n) = 3n^4 + 2n^3 - n + 1$ is of order $n^4$ ($f(n) = \Theta(n^4)$). A formal treatment of the concepts summarized here can be found in [Bana91].

For matrix algorithms, usually multiplications (i.e., multiplications and divisions, also called long operations) are taken as the basic operation. One reason is that the order of multiplications is in most cases equal to or greater than the order of additions. Another reason is that multiplications and divisions require much more computer time than additions and subtractions. In the following sections, the term floating point operations will also be used as a measure of computational complexity; in that case, the given quantity is a weighted combination of the number of additions and multiplications, used as an estimate of CPU time.

Having defined the basic concepts for time complexity, the fault simulation complexity of SPICE-like simulators will now be investigated. As seen in figure 5.4, the three operations that are repeated in the simulation flow are

• choosing the time step (see box 7 in figure 5.4),
• performing numerical integration (see box 3 in figure 5.4),
• running the NR (Newton-Raphson) iterations (see box 9 in figure 5.4).

The first two operations have negligible runtime compared to the NR iterations, which account for most of the computational time. An NR iteration consists of the repetition of two actions, namely evaluating the component models and solving the resulting linear system of equations. Thus the contribution of the NR iterations to the CPU time at each time point can be attributed to three factors:

• evaluation of the device models,
• solution of the set of linear equations,
• the total number of NR iterations.

These factors will be discussed separately in terms of their complexity in the case of fault simulation.

5.9.1 Evaluation of Device Models

The complexity of the evaluation of the device models depends on the model level chosen. For more complex device models, the building of the equations can require a large number of floating point operations. For example, one transistor model evaluation for the SPICE level 3 model requires around 200 floating point operations. Simulations for design verification often require high accuracy and thus transistor models of high complexity. However, the simulation of certain tests can be done using higher-level transistor models with less complexity. The measurement accuracy that is available at the tester can also be a guideline in determining the required simulation accuracy [Engi99a]. Since the possibility of using higher-level component models depends completely on the design functionality, the simulation environment used and the test to be simulated, it is not realistic to suggest this as a general method for CPU time reduction.

5.9.2 Solution of Linear Equations

In SPICE-like simulators, the LU decomposition method is used in conjunction with sparse matrix techniques in order to keep the complexity of solving equation 5.10 low. Sparse matrix techniques are based on performing no calculations on those matrix elements which are structural zeros, i.e. which are known to remain zero during the whole simulation because of the topology of the circuit to be simulated [Kund86]. In table 5.1, the operational complexity of the solution methods for linear sets of equations is shown. The calculations have been made for an $n \times n$ system matrix, and in the sparse case an average of p nonzero elements per row/column is assumed. Inverting an $n \times n$ matrix costs about $n^3$ multiplications. It can be seen that the methods given in the table have lower operational complexity than matrix inversion, and that taking advantage of the sparsity of the system matrix decreases the operational complexity further. SPICE-based simulators typically use sparse LU decomposition for the solution of linear equations. The expressions in table 5.1 will be used in the coming sections for the comparison of the complexity of linear equation solution methods for efficient fault simulation.
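The effect summarized in the table can be reproduced with standard libraries; a small sketch (the matrix size and density are arbitrary choices) comparing a dense LU solve with a sparse LU solve that skips structural zeros:

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import splu

n = 2000
A = sp.random(n, n, density=5.0 / n, format='csc', random_state=0)
shift = float(abs(A).sum(axis=1).max()) + 1.0   # make A strictly diagonally dominant
A = (A + shift * sp.eye(n, format='csc')).tocsc()
b = np.random.default_rng(0).standard_normal(n)

x_dense = lu_solve(lu_factor(A.toarray()), b)   # dense LU: Theta(n^3) factorization
x_sparse = splu(A).solve(b)                     # sparse LU: skips structural zeros
print(np.allclose(x_dense, x_sparse))           # same solution, far fewer operations
```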

5.9.3 Number of NR Iterations

The number of iterations of an algorithm can be determined by examining the rate of convergence, in other words, the rate of decrease of the error of the solution at each iteration.

Step                  Multiplications          Additions                   Complexity
Gaussian Elimination  n^3/3 + n^2 + n/3        n^3/3 + n^2/2 + 5n/6        Θ(n^3)
LU Dec. (T)           n^3/3 + n^2 - 5n/6       n^3/3 + n^2 - 5n/6          Θ(n^3)
LU Dec. (FS)          n^2/2 - n/2              n^2/2 - n/2                 Θ(n^2)
LU Dec. (BS)          n^2/2 + n/2              n^2/2 - n/2                 Θ(n^2)
Sparse LU Dec. (F)    p^2n/3 - p^2/(3n)        p^2n/3 - p^2/2 + p^2/(6n)   Θ(p^2n)
Sparse LU Dec. (FS)   pn/2 - p/2               pn/2 - p/2                  Θ(pn)
Sparse LU Dec. (BS)   pn/2 + p/2               pn/2 - p/2                  Θ(pn)

Table 5.1: Complexity of algorithms for solving sets of linear equations (Dec.=decomposition, T=total, FS=forward substitutions, BS=backward substitutions, F=factorization)

Let $x_k$ be the solution of the NR algorithm at iteration k at time point $t_i$. Denoting the exact solution at this time point by $x^* = x(t_i)$, the error $\epsilon_k$ at the kth iteration can be defined as

$$\epsilon_k = |x^* - x_k| \qquad (5.15)$$

With the iteration error defined in this manner, it can be shown [Chua75] that the NR iteration has a quadratic rate of convergence, i.e. the iteration error at step k + 1 can be expressed in terms of the error at iteration k as

$$\epsilon_{k+1} = c \cdot (\epsilon_k)^2 \qquad (5.16)$$

where c is a constant. As seen in equation 5.16, the condition $\epsilon_0 < 1$ has to be satisfied for convergence. When the initial guess is inside the convergence region of $x^*$, NR converges quickly because of this quadratic convergence rate. However, like other similar iterative techniques, the NR algorithm has only local convergence. That is, an initial guess that lies outside the convergence region of the solution will cause the algorithm to diverge or to converge to a local minimum. Normally this does not pose a problem for the transient simulation steps, since the solution of the previous time point can be used as the initial guess and it is almost always very close to the final solution $x^*$. Besides, backtracking makes it possible to take a smaller time step when convergence cannot be reached.

When the transient simulation of faulty nonlinear circuits is considered, it is possible to decrease the number of NR iterations at a particular time point $t_i$ by making a better initial guess $x_0$ for the solution $x(t_i)$. In order to provide a decrease in CPU time with respect to the standard circuit simulation, $x_0$ has to be closer to the solution $x^*$ than the standard initial guess, $x_0 = x(t_{i-1})$, i.e. the condition

$$|x^* - x_0| < |x^* - x(t_{i-1})| \qquad (5.17)$$

has to be satisfied. Satisfying this condition requires $x_0$ to be very close to $x^*$: because of the quadratic convergence of the NR iterations, a factor of two improvement in the initial condition (i.e. $|x^* - x_0| = |x^* - x(t_{i-1})|/2$) will result only in an improvement of $\sqrt{2}$ in the number of iterations. This convergence property of the NR algorithm is visualized in figure 5.5. The consequence of this property for fault simulation is that an improved initial condition $x_0$ has to be calculated with high accuracy in order to be able to decrease the number of NR iterations.
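The weak payoff of a better initial point can be seen on a scalar example: under quadratic convergence the error is roughly squared per step, so even a substantially better starting guess saves only about one iteration. This is an illustrative sketch, not the author's experiment:

```python
def nr_iterations(x0, tol=1e-14):
    """Count NR iterations for f(x) = x^2 - 2 from initial guess x0."""
    target = 2.0 ** 0.5
    x, k = x0, 0
    while abs(x - target) > tol:
        x = x - (x * x - 2.0) / (2.0 * x)   # Newton step
        k += 1
    return k

for x0 in (3.0, 2.0, 1.75):                  # progressively better initial guesses
    print(x0, nr_iterations(x0))             # iteration counts shrink only slowly
```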

5.10 Fault Simulation: Overview of Existing Methods

Figure 5.5: Decrease in the number of NR iterations with an improved first guess

The most challenging problem for defect-oriented test evaluation of analog macros is the transient fault simulation of nonlinear circuits. Transient simulation is the most CPU-intensive of the standard simulation types (DC, AC, and transient) that are presently of interest for fault simulation. In the discussion here, a distinction will be made between linear and nonlinear circuits. A linear circuit has either only linear devices (i.e. linear resistors, capacitors, etc.) or components/subcircuits which are made up of nonlinear elements but behave in a linear way (e.g. an operational amplifier) for the input concerned, in both fault-free and faulty conditions. This is for instance the case when an operational amplifier has a soft fault that only changes the gain but does not affect the functioning of the circuit any further.

As far as linear circuits are concerned, the simulation of faults from scratch (i.e. $N_F + 1$ circuit simulations for $N_F$ faults) does not pose a very difficult problem. The simulation times for these circuits are in general not extremely high, and techniques for decreasing the fault simulation time are more readily applicable. However, this is not the case for circuits with many nonlinear components. For these circuits, time-consuming NR iterations have to be used for calculating the operating point. This is a required step for all simulation types, and is done once for single DC and AC simulations. For transient simulations, on the other hand, the NR iterations have to be repeated at each time point.

Although analog fault simulation has been a research area for more than ten years, there is still no industrial fault simulation tool which uses special algorithms optimized for the time efficiency of simulating faults. The few existing industrial tools [Xing98] are based on standard circuit simulators. On the other hand, the increasing quality demands for analog circuits make a defect-oriented test evaluation necessary. In recent years, this has been reflected in the amount of research done on fault simulation [Hou98], [Augu98], [Voor97b], [Zwol97], [Yang99]. A number of tools have been written to demonstrate various methods to increase time efficiency, although all have their limitations in terms of circuit functionality and simulation types. A brief overview of these methods will now be given, revealing their advantages and shortcomings.

5.10.1 Methods for Linear Circuits

Most analog circuits are actually designed to operate linearly. When it can be safely assumed that the faults injected into the circuit do not cause it to operate outside the linear region, fault simulation can be simplified in many ways. One of these is using the Laplace transform of the linear transfer function as a functional representation of the faulty (sub)circuit. Examples of this approach are given in [Vari96] and [Voor97b]. The main idea used is that the difference in the system matrix will be linearly reflected in the difference in the circuit response. The general problem with such methods is that the simulated system does not necessarily remain in the linear region, especially in the case of catastrophic faults and even some parametric faults. Hence, it is not realistic to base a general simulation tool on this category of methods.

5.10.2 Methods for Solving Sets of Linear Equations Under Partial Coefficient Modifications

As explained previously, the solution of sets of linear equations is performed repeatedly during the simulation process. The algorithms used and their complexities have been listed in section 5.9.2. As far as fault simulation is concerned, the complexity of simulation depends not only on the circuit size (i.e., the number of nodes, N), but also on the number of faults to be simulated. The number of faults, in turn, is in general proportional to the size of the circuit. For example, if the linear equations $A \cdot x = b$ have to be solved once (say, for the DC simulation of a linear circuit) the complexity is $\Theta(N^3)$. However, assuming that all the bridging faults in the circuit are simulated, this has to be repeated for each fault in a list of $\binom{N}{2} = \frac{N^2 - N}{2}$ faults. This increases the complexity of solving the linear equations to $\Theta(N^5)$. Clearly, this observation is valid for all simulation modes and circuit types, since the solution of linear equations is a basic part in all cases. Knowing that only a small part of A changes for each fault, it is possible to solve the equations at a smaller time cost. Two methods have been suggested for this: Householder's method and reuse of the fault-free LU decomposition. Both methods will be described in the following paragraphs.

5.10.2.1 Householder’s Method

A well-known technique for calculating the inverses of matrices is based on a method called matrix inversion by modification, introduced by Householder [Hous64].

Theorem 5.1 (Householder) Given the matrix $B \in \mathbb{R}^{m \times m}$, vectors $w, v \in \mathbb{R}^m$ and scalars $\sigma, \tau \in \mathbb{R}$ such that $\sigma^{-1} + \tau^{-1} = v^T B^{-1} w$,

$$\left( B - \sigma w v^T \right)^{-1} = B^{-1} - \tau B^{-1} w v^T B^{-1} \qquad (5.18)$$

where $v^T$ is the transpose of $v$.

The proof of this theorem and more theoretical details about matrix inversion by modification can be found in [Hous64]. In practice, the objective of using Householder's formula in fault simulation is updating the Jacobian matrix $J$ used in the NR iterations, as given in equation 5.10. When the difference

$$\Delta J = J - J_f \qquad (5.19)$$

between the Jacobian $J_f$ of the faulty circuit and the Jacobian $J$ of the fault-free circuit is limited to a small number of matrix elements, it is possible to use equation 5.18 for calculating the inverse of $J_f$ from the inverse of $J$ at a lower computational cost than calculating $J_f^{-1}$ directly or solving the linear set of equations using LU decomposition. In general, this is possible by taking $B = J$ and $\Delta J = \sigma w v^T$ in equation 5.18. In particular, if $J$ and $J_f$ differ from each other in only the cth column, i.e., if all the nonzero elements of $\Delta J$ lie in the cth column, then by taking $z_c = \Delta J \cdot e_c$, where $e_c$ is the cth unit vector and $z_c$ is the cth column of $\Delta J$, we can write

$$J_f = J - z_c e_c^T \qquad (5.20)$$

in which formula 5.18 can be directly applied by taking $\sigma = 1$, $w = z_c$ and $v = e_c$. On the other hand, if the nonzero entries of $\Delta J$ are limited only to row r,

$$J_f = J - e_r z_r^T \qquad (5.21)$$

can be used for applying Householder's formula, where $z_r$ is the transpose of the nonzero row of $\Delta J$, i.e. $z_r^T = e_r^T \cdot \Delta J$, where $e_r$ is the rth unit vector.

While equation 5.18 is useful for inverting $J_f$ by modification, it is clear from the above examples that the difference elements must be limited to one row or one column for this formula to be applicable. In the fault simulation of nonlinear circuits, however, this is often not the case. A more general version of equation 5.18, upon which fault simulation methods are usually based, has been given by Woodbury [Wood50] as

$$\left( B - U S V^T \right)^{-1} = B^{-1} - B^{-1} U T V^T B^{-1} \qquad (5.22)$$

where $B \in \mathbb{R}^{m \times m}$, $U, V \in \mathbb{R}^{m \times l}$ and the matrices $S, T \in \mathbb{R}^{l \times l}$ satisfy

$$T^{-1} + S^{-1} = V^T B^{-1} U \qquad (5.23)$$

All the involved symbols represent matrices, and the superscript T represents the transpose of a matrix. This version of Householder's theorem is also used in [Hou98] for the implementation of the fault simulation tool CONCERT. It is assumed that at a given time step $t_i$, the Jacobian matrix $J$ from the last iteration of the fault-free circuit at the same time point will not differ much from the Jacobian matrix $J_f^k$ calculated for an iteration k of the fth faulty circuit. In this case,

$$\Delta J = J_f^k - J \qquad (5.24)$$

will have very few matrix elements which are larger than a predetermined threshold value (named visible components in [Hou98]). When there are l visible components in a circuit, $\Delta J$ can be decomposed into three matrices, as

$$\Delta J = P_f \cdot D_f^k \cdot Q_f^T \qquad (5.25)$$

where $D_f^k$ is an $l \times l$ diagonal matrix, $P_f \in \mathbb{R}^{m \times l}$ and $Q_f \in \mathbb{R}^{m \times l}$ have only 1, -1 and 0 as elements, and T denotes the transpose operation. From this decomposition, $J_f^k = J + \Delta J$ is calculated using the relation given in equation 5.22, by setting $B = J$, $U = P_f$, $S = D_f^k$ and $V = Q_f$. The details of the method used are given in [Hou98].

The use of Householder's method in the mentioned work is based on the idea that, in the faulty case, the Jacobian will remain more or less the same and only a few of its components will change because of a fault¹. The gain in CPU time obtained by following the above procedure, instead of calculating and solving the LU decomposition of $J_f^k$, relies on the assumption that $l \ll m$, where m is the size of the Jacobian matrix.

The cost functions of the CONCERT [Hou98] and standard methods have been derived here in order to make a preliminary comparison as to the conditions under which the CONCERT method is favorable. In tables 5.2 and 5.3, the number of arithmetic operations required for the CONCERT method of calculating $(J_f^k)^{-1}$ and for the standard LU decomposition is displayed. The calculations are based on a Jacobian of size $m \times m$, of which l elements change because of the faulty behavior. The calculations are given in table 5.2 without taking sparse matrix methods into account; in table 5.3 the calculations are based on sparse matrix methods, where it is assumed that p elements in each row/column are structural nonzeros.

¹Note that this is not true for nonlinear circuits in general. For a transistor circuit, for instance, a bridging fault can change the operating points of many transistors, causing the Jacobian elements due to these devices to change. For soft faults this assumption is more often valid.

Method   CONCERT                                     LU decomposition
M/D      (l+2)m^2 + m(l-1) + l^3/3 + l^2 - l/3       m^3/3 + m^2 - m/3
A/S      (l+2)m^2 - m + l^3/3 + l^2 - 5l/6           m^3/3 + m^2/2 - 5m/6
Comp     Θ(lm^2)                                     Θ(m^3)

Table 5.2: Comparison of operational complexity for full matrix methods. M/D denotes multiplications and divisions, A/S denotes additions and subtractions, and Comp stands for the time complexity.

Method   CONCERT                                              LU decomposition
M/D      (l+2)pm + ml - m + 4l/3 - 1/(3l)                     p^2m/3 + pm - p^2/(3m)
A/S      (l+2)pm - (l+2)p + ml + 4l/3 - 3/2 - 1/(6l) + l^2    p^2m/3 + pm + p^2/(6m) - p^2/2 - p
Comp     Θ(lpm)                                               Θ(p^2m)

Table 5.3: Comparison of operational complexity for sparse matrix methods. M/D denotes multiplications and divisions, A/S denotes additions and subtractions, and Comp stands for the time complexity.

The expressions in table 5.3 imply that for a circuit with a very large number of nodes, the CONCERT method of calculating $(J_f^k)^{-1}$ is more favorable when (taking the highest-order terms of the polynomials) $lpm < p^2m/3$, which in turn implies $l < p/3$. In words, using a CONCERT-like scheme for the solution of the linear equations pays off only when the number of elements that change in the whole Jacobian matrix is less than one-third of the number of nonzero elements per row/column. To give an example, consider a circuit with 1000 nodes (thus a Jacobian with ~1,000,000 elements). Normally, not more than 5% of the entries of such a Jacobian will be structural nonzeros, thus p = 50. This means that l has to be smaller than p/3 ≈ 17: in the whole matrix of one million elements, only about 16 may change for this method to be favorable.

Figure 5.6 shows a comparison between the usage of Householder's update method for full and sparse matrices. This comparison is based on long operations, i.e. multiplications and divisions, since these cost far more than additions and subtractions. It has already been explained that the main assumption in CONCERT is that a small number of matrix elements change during the simulation. When this assumption holds, some gain in CPU time is possible. This assumption is often satisfied because the Jacobian matrix is sparse, i.e. has very few nonzero elements. Sparse matrix methods, which are always used in commercial simulators, also rely on this property of the system matrix in order to reduce the computation time. Those elements of the Jacobian which always remain zero due to the circuit topology (i.e. the structural zeros) are automatically skipped in the calculations, which decreases the computation time substantially [Kund86]. Only the remaining elements (i.e. the structural nonzeros) are computed in simulation steps such as the LU decomposition. This means, in turn, that a difference arises when the CONCERT method is compared with standard LU decomposition for full matrices and for sparse matrices: in the sparse case, the usage of CONCERT is less favorable for similar circuit sizes.

When figures 5.6(a) and 5.6(b) are compared, it can be seen that the number of long operations required for the CONCERT method remains much lower than that required for LU decomposition when full matrix methods are taken as the basis (note the difference in scale between the two figures). In contrast, the difference between the number of operations shown in figures 5.6(c) and 5.6(d) reveals that in the sparse case, standard LU decomposition is often more favorable than the CONCERT method. For most values of the parameters l and m, the complexity characteristic given in figure 5.6(c) (with maximum point at $9 \times 10^5$ long operations) remains lower than that given in figure 5.6(d) (with maximum point at $2.7 \times 10^6$ long operations).

In figures 5.7 and 5.8 the number of long operations with respect to l is given for a circuit with m = 1000. It can be seen that in figure 5.7 the operations due to CONCERT remain far below the LU case, whereas in figure 5.8 these operations exceed those of LU decomposition at around l = 15 (15 of the 1,000,000 matrix elements change).
As a result, it can be concluded that methods similar to those used in CONCERT should not be applied to all fault types, but only to parametric faults or bridging faults with a large resistance which are known not to cause radical changes in the operation of the circuit. In a possible implementation of an industrial tool, checks for distinguishing between the fault types would have to be included in order to use such methods.
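To make the mechanics of equations 5.22-5.25 concrete, here is a small numerical sketch of the rank-l Jacobian update: the fault-free factorization is reused, and only an $l \times l$ system is solved per fault. The matrices are random stand-ins, not circuit data:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
m, l = 200, 3                                  # matrix size, "visible" changes

J = rng.standard_normal((m, m)) + m * np.eye(m)   # well-conditioned fault-free Jacobian
lu, piv = lu_factor(J)                            # factored once, reused per fault

# Rank-l change Delta_J = P @ D @ Q.T (cf. equation 5.25)
P = np.zeros((m, l)); Q = np.zeros((m, l))
P[rng.choice(m, l, replace=False), range(l)] = 1.0
Q[rng.choice(m, l, replace=False), range(l)] = 1.0
D = np.diag(rng.standard_normal(l))
b = rng.standard_normal(m)

# Solve (J + P D Q^T) x = b via the Woodbury identity, using only solves with J
JinvB = lu_solve((lu, piv), b)                    # J^{-1} b
JinvP = lu_solve((lu, piv), P)                    # J^{-1} P  (l extra solves)
S = np.linalg.inv(D) + Q.T @ JinvP                # small l x l matrix
x = JinvB - JinvP @ np.linalg.solve(S, Q.T @ JinvB)

print(np.allclose((J + P @ D @ Q.T) @ x, b))      # True
```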

Figure 5.6: Number of long operations derived for (a) full matrix LU decomposition, (b) full matrix CONCERT method, (c) sparse matrix LU decomposition, and (d) sparse matrix CONCERT method


Figure 5.7: Derived variation of long operations for different values of l when m=1000 and full matrix methods are used


Figure 5.8: Derived variation of long operations for different values of l when m=1000 and sparse matrix methods are used
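Using the M/D cost expressions of table 5.3 as reconstructed above, a few lines suffice to reproduce the crossover near l = 15 that figure 5.8 shows for m = 1000 and p = 50. This is a sanity-check sketch, not the original plotting code:

```python
def concert_md(l, p, m):
    """CONCERT multiplications/divisions, sparse case (cf. table 5.3)."""
    return (l + 2) * p * m + m * l - m + 4 * l / 3 - 1 / (3 * l)

def sparse_lu_md(p, m):
    """Sparse LU multiplications/divisions (cf. table 5.3)."""
    return p**2 * m / 3 + p * m - p**2 / (3 * m)

m, p = 1000, 50
for l in range(5, 51, 5):
    cheaper = concert_md(l, p, m) < sparse_lu_md(p, m)
    print(l, cheaper)        # CONCERT stops paying off between l = 15 and l = 20
```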

In [Hou98], speedups of between 2.5 and 125 are reported using the method described in the previous paragraphs. However, since other methods (such as fault ordering) are also used in CONCERT, it is difficult to say which one is responsible for the improvement. It is, however, observable in the results that the speedup for linear circuits is higher than that for nonlinear circuits, since l becomes large especially in the case of a bridging/short/open fault in a nonlinear circuit. The same method is also used in [Tian98]; however, only the DC simulation of nonlinear circuits is considered there.

5.10.2.2 Reusing the Fault-Free LU Decomposition

Another method of making use of the similarity between the fault-free and faulty versions of equation 5.10 is reusing the L and U matrices calculated for the fault-free circuit. A fault simulation tool using this method has been described in [Augu98]. In the LU decomposition, the elements of the matrix to be decomposed are treated in a sequence that starts from the upper left corner and ends at the lower right corner. In other words, when a change occurs in the last row or column of the matrix, the LU decomposition does not have to be carried out again from the beginning: only the last row and column need to be processed to obtain the last row of the L matrix and the last column of the U matrix. The rest of these matrices remain the same as before the change.

The method described in [Augu98] is based on using this property of the LU decomposition for faulty elements. For linear simulation, when a circuit element is faulty, an extra variable is added to the vector of unknowns; for a bridging fault, for example, this is the current that runs through the bridging resistance. In this way, the linear set of equations to be solved (i.e. equation 5.10) is augmented by one extra equation and one extra variable. The matrices L and U from the golden circuit can be used in this case as they are, by decomposing only the added equation, so the time cost of the decomposition is decreased.

The tool described in [Augu98] is a simulator for the AC and DC simulation of linear and nonlinear circuits; no transient functionality is described. The usage of LU factorization in the manner described in this work is an interesting idea, and it is sure to deliver a CPU time gain for linear and weakly nonlinear circuits. However, for circuits with low-level transistor models, where more Jacobian elements can change at each iteration, the same problems as with Householder's method apply. In general, reusing the fault-free LU decomposition requires that it is known beforehand which elements of the system matrix will change, so that the part that remains the same can be decomposed in advance. Some more details on the reuse of the LU decomposition will be discussed in chapter 6.
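A sketch of this bordered-system idea for a single bridging fault, reusing the golden LU factors via block elimination; the fault stamp (vectors a, c and scalars d, e) is a random placeholder for the actual extra equation:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
m = 100
J = rng.standard_normal((m, m)) + m * np.eye(m)   # fault-free system matrix
lu, piv = lu_factor(J)                            # golden LU, computed once

F = rng.standard_normal(m)                        # right-hand side
a = rng.standard_normal(m); c = rng.standard_normal(m)
d, e = 4.0, 1.0                                   # border entries of the fault equation

# Bordered system:  [ J   a ] [ x  ]   [ F ]
#                   [ c^T d ] [ i_f] = [ e ]
# Block elimination reuses the golden LU (only two extra triangular solves):
y1 = lu_solve((lu, piv), F)                       # J^{-1} F
y2 = lu_solve((lu, piv), a)                       # J^{-1} a
i_f = (e - c @ y1) / (d - c @ y2)                 # scalar Schur complement solve
x = y1 - i_f * y2

big = np.block([[J, a[:, None]], [c[None, :], np.array([[d]])]])
print(np.allclose(big @ np.concatenate([x, [i_f]]), np.concatenate([F, [e]])))
```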

5.10.3 Other Methods

As explained in the previous sections, solving the linear sets of equations is only part of the computation when the whole simulation flow given in figure 5.4 is considered. An important part of the CPU time is spent on calculating and recalculating the Jacobian matrix and the total node currents at each NR iteration (see equation 5.10). Obviously, decreasing this CPU time means decreasing either the number of NR iterations or the number of evaluation operations per iteration.

5.10.3.1 Improving Initial Points

One way to decrease the number of NR iterations is to find an initial point which is nearer to the solution than the initial point which would normally be used. The calculation of an improved initial value depends on knowing beforehand which faulty responses will be close to each other and which will not. This has been implemented earlier in several works by means of fault ordering schemes [Tian98], [Zwol97]. In these papers, the result of the previous time point is used as the initial point of the NR iterations if the (Euclidean) distance between the responses of two circuits at the previous time point is smaller than a predetermined value. One of the disadvantages of this method is that this predetermined distance value can depend on the size of the circuit, the value of the input signal at the given time point and the nonlinearity of the system components. Another problem is that, depending on the choice of this 'distance threshold', the number of NR iterations may even increase instead of decrease. This is because the prediction made based on the previous time point of the circuit to be simulated can be a better initial guess than the value calculated according to the previous faulty circuit response.

Another problem with the ordering method is that the ordering of circuits with respect to the results after each time point will introduce extra steps with a complexity of N log(N), where N is the number of faults [Bana91]. It may be possible to do this ordering at larger intervals, but when the intervals are too long a wrong order is possible, which can increase the number of iterations and take away the whole advantage of the method.
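One possible shape of such a prediction step, with the distance threshold as the critical tuning knob discussed above; this is a simplified sketch with hypothetical names, not the scheme of [Tian98] or [Zwol97] exactly:

```python
import numpy as np

def pick_initial_guess(own_prev, candidate_solutions, threshold):
    """Return a previously computed faulty-circuit solution as the NR
    initial point if one lies within `threshold` (Euclidean distance) of
    this circuit's own previous time point solution; otherwise fall back
    to the own previous solution, as a standard simulator would."""
    best, best_dist = own_prev, threshold
    for sol in candidate_solutions:
        dist = float(np.linalg.norm(sol - own_prev))
        if dist < best_dist:
            best, best_dist = sol, dist
    return best
```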

5.10.3.2 Preventing Redundant Model Re-evaluations

A second method for limiting the operations within the NR iterations is to store the evaluations of the circuit currents and the Jacobian matrix. When the change in the circuit voltages is minimal, it may not be necessary to evaluate these functions each time. Standard simulators check the voltage change before evaluation, but it is possible to extend these checks to comparisons between the voltages of different faulty circuits. When some of the voltages in the circuit remain the same, unnecessary operations can be avoided by not re-evaluating the corresponding device model equations. This method is described in [Zwol97]; however, no numerical results of CPU time improvements are presented there.
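A sketch of such a bypass check for a single device model; the voltage tolerance and the evaluate callback are assumed interfaces, not those of any particular simulator:

```python
import numpy as np

class CachedDeviceModel:
    """Skip re-evaluating a device model when its terminal voltages have
    changed by less than vtol since the last evaluation (results cached)."""

    def __init__(self, evaluate, vtol=1e-6):
        self.evaluate = evaluate          # returns (currents, jacobian_stamp)
        self.vtol = vtol
        self.last_v = None
        self.cached = None

    def __call__(self, v):
        if self.last_v is not None and np.max(np.abs(v - self.last_v)) < self.vtol:
            return self.cached            # voltages (almost) unchanged: reuse
        self.cached = self.evaluate(v)
        self.last_v = np.array(v, copy=True)
        return self.cached
```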

5.11 Conclusion

In this chapter, the usage of fault simulation as a test signal evaluation technique and the efficiency of fault simulation methods have been discussed. Fault simulation is the only realistic way of verifying that a test has sufficient coverage with respect to the possible manufacturing defects. The modeling of manufacturing defects forms the basis for the accuracy of the fault coverage figures obtained by fault simulation, and is thus one of the crucial factors in the off-line evaluation of analog and mixed-signal tests.

The main obstacle to the common usage of fault simulation for evaluating test programs is its high computational cost, especially for the transient simulation of analog macros. The fact that the number of faults also depends on the macro size makes the complexity of fault simulation high. Thus the simulation times of even medium-sized macros can become too long, making fault simulation impractical in an industrial environment.

A solution for decreasing the fault simulation times is to develop circuit simulators which are optimized for the efficient simulation of faulty circuits. These simulators can make use of the latency which is inherent in the simulation process, since the circuits which are simulated for this purpose are very similar in topology, with few differences. Some of the methods which have already been suggested by several researchers for efficient fault simulation have been discussed, and the complexity of the suggested algorithms has been compared in some cases. The general conclusions that can be drawn from the discussions in this chapter are as follows:

• The existing methods are not efficient enough to be applicable to the transient simulation of nonlinear circuits.
• The use of Householder's method as suggested in [Hou98] requires that only a very small part of the Jacobian matrix changes for all injected faults. The presented analysis of this method shows that using a CONCERT-like scheme is favorable only when the number of elements that change in the whole Jacobian matrix is less than one-third of the number of nonzero elements per row/column. This number will often be higher for the simulation of a nonlinear circuit with a structural fault injected. Hence, this method is not favorable for the fault simulation of nonlinear circuits in general.
• Since the optimal method depends heavily on the simulation type and the simulated circuit, a fault simulator should include procedures for checking for the various cases in which specific simulation methods can be applied. Methods such as those from [Hou98] or [Augu98] can be applied within a simulator which checks for the conditions under which they are favorable.
• The CPU time spent on the formulation of equations (i.e. model evaluation) depends on the complexity of the device models involved. However, since in some cases simulation requires very detailed device models, using higher-level device models cannot be offered as a general improvement method.
• Improving the initial points for NR iterations requires very accurate calculation of the new initial point, and the calculation method should not cost too many operations. Ordering at each time point is computationally too expensive to deliver any gain in efficiency.

Specifically, the conclusions presented lead to a basic view of which methods can be used in an efficient fault simulator to make it applicable to a large class of nonlinear circuits, without limitations to weak nonlinearity or specific fault models. The main problem is to create a general simulation method that makes it possible to use a number of the described methods for nonlinear circuits in general. From the comparisons presented in this chapter,

• the improvement of initial points
• the reuse of the LU decomposition

are favorable for a general solution. Hence, these methods will be used in a prototype simulator, which will be described in the next chapter.

5.12 Bibliography

[Augu98] J. Soares Augusto and C. F. Beltran Almeida, "Fast Fault Simulation in Linear and Nonlinear Circuits with Fault Rubber Stamps," in Proc. of the 4th IEEE International Mixed-Signal Test Workshop, The Hague, Netherlands, June 1998, pp. 28-37.

[Bana91] L. Banachowski, A. Kreczmar and W. Rytter, Analysis of Algorithms and Data Structures, Addison-Wesley Publishing Company, 1991.

[Beur99] R.H. Beurze, Y. Xing, R. van Kleef, R.J.W.T. Tangelder and N. Engin, "Practical Implementation of Defect-Oriented Testing for a Mixed-Signal Class-D Amplifier," in Proc. of European Test Workshop, Constance, Germany, May 1999, pp. 28-33.

[Brul91] E. Bruls, F. Camerik, H. Kretschman and J. Jess, "A Generic Method to Develop a Defect Monitoring System for IC Processes," in Proc. of International Test Conference, 1991, pp. 218-228.

[Call88] W.J. McCalla, Fundamentals of Computer-Aided Circuit Simulation, Kluwer Academic Publishers, 1988.

[Chua75] L.O. Chua and P.-M. Lin, Computer-Aided Analysis of Electronic Circuits, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1975.

[Csiz98] N. Csizmadia, private communication, Philips Semiconductors, Nijmegen, The Netherlands, January 1998.

[Engi99a] N. Engin, Improving the Test Time and Product Quality with DOTSS, Philips report no: RNB-C/47/99I-023, January 1999.

[Gyve91] J. Pineda de Gyvez, IC Defect-Sensitivity: Theory and Computational Models for Yield Prediction, Ph.D. Thesis, Eindhoven, The Netherlands, April 1991.

[Harv94] R.J.A. Harvey, A.M.D. Richardson, E.M.J.G. Bruls and K. Baker, "Analogue Fault Simulation Based on Layout Dependent Fault Models," in Proc. of International Test Conference, 1994, pp. 641-649.

[Hou98] J. Hou and A. Chatterjee, "CONCERT: A Concurrent Fault Simulator for Analog Circuits," in Proc. of the 4th IEEE International Mixed-Signal Test Workshop, The Hague, Netherlands, June 1998, pp. 3-8.

[Hous64] A. S. Householder, The Theory of Matrices in Numerical Analysis, New York: Blaisdell, 1964.

[Kund86] K.S. Kundert, "Sparse Matrix Techniques," in Circuit Analysis, Simulation and Design, A.E. Ruehli (editor), 1986, pp. 281-324.

[Maly86] W. Maly, A.J. Strojwas and S.W. Director, "VLSI Yield Prediction and Estimation: A Unified Framework," in IEEE Transactions on Computer-Aided Design, vol. CAD-5, no. 1, January 1986, pp. 114-130.

[Moor87] D. Moore and H. Walker, Yield Simulation for Integrated Circuits, Kluwer Academic Publishers, 1987.

[Nage75] L. Nagel, SPICE2: A Computer Program to Simulate Semiconductor Circuits, Memorandum No. ERL-M520, Electronics Research Laboratory, College of Engineering, University of California, Berkeley, 1975.

[Ogro94] J. Ogrodzki, Circuit Simulation Methods and Algorithms, CRC Press, 1994.

[Pate98] J.H. Patel, "Stuck-at Fault: A Fault Model for the Next Millennium," in Proc. of International Test Conference, 1998, p. 1166.

[Pol 96] J.A. van der Pol, F.G. Kuper and E.R. Ooms, "Relation Between Yield and Reliability of Integrated Circuits and Application Failure Assessment and Reduction in the One Digit Fit and PPM Reliability Era," in Microelectronics Reliability, vol. 36, no. 11/12, pp. 1603-1610, 1996.

[Sach98] M. Sachdev, Defect Oriented Testing for CMOS Analog and Digital Circuits, Kluwer Academic Publishers, 1998.

[Sale94] R. Saleh, S.-J. Jou and A.R. Newton, Mixed-Mode Simulation and Analog Multilevel Simulation, Kluwer Academic Publishers, 1994.

[Sebe95] C. Sebeke, J. P. Teixeira and M. J. Ohletz, "Automatic Fault Extraction and Simulation of Layout Realistic Faults for Analog Circuits," in Proc. of the European Design and Test Conference, Paris, France, March 1995, pp. 464-468.

[Sing86] K. Singhal and J. Vlach, "Formulation of Circuit Equations," in Circuit Analysis, Simulation and Design, A.E. Ruehli (editor), Elsevier Science Publishers B.V. (North-Holland), 1986.

[Stap83] C.H. Stapper, Jr., "Modeling of Integrated Circuit Defect Sensitivities," in IBM Journal of Research and Development, vol. 27, no. 6, November 1983, pp. 549-557.

[Stap95] C.H. Stapper and R.J. Rosner, "Integrated Circuit Yield Management and Yield Analysis: Development and Implementation," in IEEE Transactions on Semiconductor Manufacturing, vol. 8, no. 2, May 1995, pp. 95-102.

[Stra99] B. Straube, K. Reinschke, W. Vermeiren, K. Robenack, B. Muller, C. Clauss, "On the Fault-Injection-Caused Increase of the DAE-Index in Analogue Fault Simulation," in Proc. of European Test Workshop, Constance, Germany, May 1999, pp. 118-122.

[Tian98] M.W. Tian and C.-J.R. Shi, "Efficient DC Fault Simulation of Nonlinear Analog Circuits," in Proc. of DATE-Conference, Paris, France, February 1998, pp. 899-904.

[Vari96] P. N. Variyam and A. Chatterjee, "Fast Fault Simulation of Analog Systems Using Polynomial Waveform Representations," in Proc. of 2nd International Mixed-Signal Test Workshop, Quebec City, Canada, May 1996, pp. 11-16.

[Voor97b] R. Voorakaranam, A. Gomes, S. Cherubal and A. Chatterjee, "Hierarchical Fault Simulation of Feedback Embedded Analog Circuits with Approximately Linear to Quadratic Speedup," in Proc. of the 3rd IEEE International Mixed-Signal Test Workshop, Seattle, USA, June 1997, pp. 48-59.

[Wood50] M. Woodbury, Inverting Modified Matrices, Memorandum Report 42, Statistical Research Group, Princeton University, 1950.

[Xing98] Y. Xing, "Defect-Oriented Testing of Mixed-Signal IC's: Some Industrial Experiences," in Proc. of International Test Conference, October 1998, pp. 678-687.

[Yang99] Z. R. Yang and M. Zwolinski, "Fast, Robust DC and Transient Fault Simulation for Nonlinear Analogue Circuits," in Proc. of DATE-Conference, Munich, Germany, March 1999, pp. 244-248.

[Zano96] T. Zanon, CODEF 1.1: Contamination-Defect-Fault-Simulation for 0.5 µm CMOS Technology, M.Sc. Thesis, University of München, Germany, November 1996.

[Zwol97] M. Zwolinski, A. D. Brown and C. D. Chalk, "Concurrent Analogue Fault Simulation," in Proc. of the 3rd IEEE International Mixed-Signal Test Workshop, Seattle, USA, June 1997, pp. 42-47.

Chapter 6

A New Approach Towards Analog Fault Simulation

6.1 Introduction

Analog fault simulation has been recognized in recent years as an important tool for evaluating test effectiveness and addressing yield problems in mixed-signal integrated circuits. The problem of how to carry out the simulation of analog faults more efficiently, however, has yet to be solved in order to make this category of tools useful to industrial design and production teams [Forc99], [Xing98].

In the past, tools have been described which use existing circuit simulators by adding controlling software that constructs the fault list, injects the faults and analyzes the results for fault coverage, such as described in [Sebe95]. However, the limited efficiency of this approach requires a closer look at fault simulation. A summary of existing methods for decreasing the fault simulation computation time has been given in chapter 5. In this chapter, a new method for exploiting latency in the transient simulation of bridging faults in nonlinear circuits is presented, and results are given.


6.2 Simulator Requirements

In the previous chapter, several observations on existing analog fault simulation methods have been made. Based on these observations, the requirements for a new fault simulator will be determined in this section.

When the existing methods were discussed in chapter 5, it was pointed out that latency exists in several operations within analog fault simulation when a SPICE-like simulator is used. Each of these operations can be carried out with various methods which can help decrease the overall latency and thus the computation time. Hence, a new fault simulation method should be adaptable, so that it can be combined with other methods.

One of the fundamental issues in using fault simulation to evaluate tests is the question of which fault models to extract and simulate. As explained in chapter 5, fault extraction tools can be used to model manufacturing defects at transistor level. In principle, any fault, structural or parametric, can be simulated with a standard circuit simulator, provided that the fault effect does not cause the circuit equations to become ill-conditioned or numerically unstable [Stra99]. The introduction of special methods for optimizing fault simulation efficiency, however, sometimes limits the types of faults that can be introduced. The fault simulators developed for linear circuit operation [Nagi93], [Vari96] are only applicable to parametric faults, because they rely on the assumption that the fault effect does not change the linear operation region of the circuit. Some of the simulators which can handle nonlinear circuits inject parametric faults only at linear circuit components [Tian98], making a parametric fault in a transistor impossible to simulate.

In general, the simulation of both parametric and structural faults is important for test evaluation. Parametric faults are in general more difficult to detect than structural faults; hence, when both are considered equally probable, parametric faults seem to be the more important candidates for fault simulation. On the other hand, it has been shown in various studies that spot defects (which cause structural faults such as bridges, opens and shorts) occur more often than parametric faults. As an example, the defect analysis done on the Pentium MMX IC manufactured in a 0.35 µm CMOS process shows that about 60% of all the defects found on a group of arbitrary defective IC's are spot defects causing opens, shorts and bridges [Need98]. Another study, carried out on IC's from various CMOS, BICMOS and Bipolar processes from the Philips manufacturing plants, shows the same defect types to be dominant among the IC failures whose causes could be identified [Pol 96].

In [Milo98], a comparative investigation is presented with respect to the use of parametric or structural faults for evaluating tests. Comparing studies on the structural fault coverage of test sets with high parametric fault coverage, and on the parametric fault coverage of test sets with high structural fault coverage, it has been concluded that test sets should be evaluated in terms of both parametric and structural faults. Although a test set with high parametric fault coverage is in general more likely to detect a large number of structural faults, this cannot always be guaranteed.

Finally, a fault simulator must be able to tackle nonlinear circuits. Most mixed-signal IC's are manufactured in CMOS processes, and hence a circuit with MOS transistors and other elements such as resistors and capacitors will be the standard case for analog fault simulation.

6.3 DC-Bias Grouping

The main problem in nonlinear fault simulation is the fact that the presence of catastrophic faults (such as resistive bridges) often changes the operating regions of the nonlinear elements in the circuit. Because SPICE-like simulators depend on the linearization of these elements, using latency to decrease simulation time is only possible between topologies whose linearizations yield similar parameters. The DC-bias grouping method that will now be presented is based on grouping those faulty circuits for which the same set of linear equations can be used with only small changes in parameters, in order to prevent the repetition of the same linear solution steps.

In the case of DC simulation of a linear circuit, the set of system equations

\[
A \cdot x = u \tag{6.1}
\]

describes the system behavior, where A denotes the N-by-N system matrix, x is the vector of unknowns and u is the input vector. If a resistive bridging fault is inserted between nodes n1 and n2 in this system, then the matrix A will change to the system matrix of the faulty circuit, A_f:

\[
A_f = A + \Delta A \tag{6.2}
\]

Here, ΔA has all elements zero apart from ΔA_{n1,n1}, ΔA_{n1,n2}, ΔA_{n2,n2} and ΔA_{n2,n1}, which have the value ±G_f, where G_f is the conductance of the inserted bridging fault. This pattern in the matrix due to the faulty element will be called the fault stamp of the bridging fault, following the definition given in [Augu98].

\[
\Delta A =
\begin{pmatrix}
0      & \cdots &        &        &        & \cdots & 0      \\
\vdots & \ddots &        &        &        &        & \vdots \\
       &        & G_f    & \cdots & -G_f   &        &        \\
       &        & \vdots &        & \vdots &        &        \\
       &        & -G_f   & \cdots & G_f    &        &        \\
\vdots &        &        &        &        & \ddots & \vdots \\
0      & \cdots &        &        &        & \cdots & 0
\end{pmatrix}
\tag{6.3}
\]

where the entries ±G_f appear at the intersections of rows and columns n1 and n2, and all other elements are zero.

In this case, it will be possible to calculate the response of the faulty system by making use of the fault-free circuit response, since

\[
(A + \Delta A) \cdot (x + \Delta x) = u \tag{6.4}
\]

where x + Δx represents the faulty response. The faulty response can now be calculated from the fault-free response because

\[
\Delta x = -(A + \Delta A)^{-1} \cdot \Delta A \cdot x \tag{6.5}
\]

which can be solved by modifying the already calculated A^{-1}, either by using Householder's method [Tian98] or LU modification [Augu98], and by making use of the sparseness of ΔA for calculating ΔA·x in an efficient manner.
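To make the linear case concrete, the following NumPy sketch (illustrative only, not the thesis implementation) computes the faulty response of equation 6.5 from the fault-free solution. It exploits the fact that the bridging fault stamp of equation 6.3 is rank one, ΔA = G_f·wwᵀ with w = e_{n1} − e_{n2}, so a Sherman-Morrison-type update (one instance of the modified-matrix techniques of Householder [Hous64] and Woodbury [Wood50] discussed in chapter 5) avoids refactoring A for every fault; all names here are assumptions for the example.

```python
import numpy as np

def faulty_response(A, x, n1, n2, Gf):
    """Fault-free solution x of A.x = u, plus the dx of equation 6.5
    for a bridge of conductance Gf between nodes n1 and n2.

    dA = Gf * outer(w, w) with w = e_n1 - e_n2 is rank one, so the
    Sherman-Morrison identity yields dx without refactoring A. In a
    real simulator, A^-1 w would come from the stored LU factors of A.
    """
    w = np.zeros(A.shape[0])
    w[n1], w[n2] = 1.0, -1.0
    Ainv_w = np.linalg.solve(A, w)
    return x - Ainv_w * (w @ x) / (1.0 / Gf + w @ Ainv_w)

# Self-check against a direct solve of the faulty system:
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # well-conditioned example
u = rng.standard_normal(5)
x = np.linalg.solve(A, u)
n1, n2, Gf = 1, 3, 1e-2
w = np.zeros(5); w[n1], w[n2] = 1.0, -1.0
assert np.allclose(faulty_response(A, x, n1, n2, Gf),
                   np.linalg.solve(A + Gf * np.outer(w, w), u))
```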

In the case of a nonlinear circuit, however, it is impossible to apply any linear method readily. The simulation of nonlinear circuits includes solving linear equations of the form

\[
J(x) \cdot x_{step} = -F(x) \tag{6.6}
\]

at each Newton-Raphson iteration step, where J is the Jacobian of the previous step, F is the MNA (Modified Nodal Analysis) formulation of the circuit equations, and x_step is the step that will be taken towards the solution x̃, such that F(x̃) ≈ 0 within the accuracy requirements of the simulation. The NR iterations are repeated for each time step, yielding the vector sequence {x̃(t)} for t = t0, t1, ..., tT as the simulation output.

Observing the suitability of equation 6.5 for dealing with fault simulations, the challenge is to find a manner of applying the same concept to nonlinear circuits. The first idea that comes to mind is the usage of piecewise linear (PWL) simulation techniques, since they are applicable to nonlinear circuits and allow the usage of linear methods. However, as standard SPICE-like simulators are more common than PWL simulators, techniques similar to PWL will instead be adopted within a SPICE-like simulator here.

In the following, an informal discussion of PWL simulation techniques will be presented in order to introduce a new fault simulation technique. This discussion is not meant as a formal introduction of PWL methods; the aim is to give an intuitive introduction to the fault simulation method that will be described, inspired by PWL simulation methodology. The reader is referred to [Bokh87] for a more formal description of PWL analysis methods, or to [Stip90] and [Buur93] for the implementation of a PWL simulator.

For applying PWL simulation to a nonlinear circuit, each of the nonlinear elements has to be represented by a PWL model. As a result, each nonlinear device can be replaced by a linear device of specific value in a given operating region. Expressed in terms of the matrix equations, this divides the solution space into regions in which the circuit can be treated as linear. This can be explained with the simple example circuit in figure 6.1. In this case, the circuit contains three linear resistors and one nonlinear element (shown in figure 6.2(a)). The i-v characteristic of this element is given in figure 6.2(b). The characteristic has one break point, meaning that if

the voltage v2 is larger than the break point voltage V_br, then the nonlinear element can be replaced by a linear resistance of value R_nl,1 in series with a voltage source of value V_s1. If v2 becomes smaller than V_br, then R_nl can be replaced by a linear resistance of value R_nl,2 in series with a voltage source of value V_s2. The solution space of the example circuit is two-dimensional (there are two nodes, with voltages v1 and v2), and the presence of R_nl divides this space into two regions, as shown in figure 6.3. These regions are called polytopes in PWL terminology.
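As a small illustration of such a companion model, the sketch below (with made-up example values, not taken from the thesis) selects the linear replacement (R, V_s) of the two-segment element of figure 6.2 from the operating point:

```python
# Hypothetical two-segment PWL resistor in the style of figure 6.2.
# V_BR and the (R, Vs) pairs are illustrative values only.
V_BR = 0.6                         # break-point voltage V_br

def companion_model(v):
    """Return the linear replacement (R, Vs) valid at operating point v."""
    if v > V_BR:
        return (1e3, 0.5)          # (R_nl,1, Vs1): first polytope
    return (1e5, 0.0)              # (R_nl,2, Vs2): second polytope

print(companion_model(1.2))        # -> (1000.0, 0.5)
print(companion_model(0.1))        # -> (100000.0, 0.0)
```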

Figure 6.1: Example circuit with a piecewise linear component

Figure 6.2: Example of a piecewise linear element (a), and its i-v characteristic (b)

Figure 6.3: Division of the solution space of the example circuit into two polytopes resulting from the nonlinear element R_nl

The example given above can be extended to a circuit with a larger number of nonlinear elements, and to nonlinear elements modeled with a larger number of PWL branches. The general result is that the solution space is divided into polytopes dictated by the modeling of the nonlinear components. For k nonlinear elements and l branches per element, a maximum of k(l - 1) polytopes can exist in the total solution space.

Turning to transient simulation, it is possible to point out some favorable properties of piecewise linear simulation techniques for saving fault simulation time. First of all, the element equations do not have to be evaluated at each time step. Rather, the circuit is treated as linear between the time points where it passes the boundaries of a polytope. Secondly, repeated LU decompositions are not necessary, since the crossing of a polytope border changes only a few elements in the system matrix. For this reason, in piecewise linear simulation the L and U matrices are often updated rather than fully recalculated. The rank-one update [Stip90] used for this purpose is similar to Householder's method described in chapter 5. Only when a change of step size requires the reevaluation of a large number of elements does a full LU decomposition have to be performed [Stip90].

If the fault simulation of nonlinear elements is considered in the light of the facts given in the previous paragraphs, it becomes attractive to use the local linearity property of the piecewise linear models. This property can be combined with the efficient fault simulation methods, most of which are applicable to linear systems. Hence, it is possible to group the faulty circuits which end up in the same polytope after the DC simulation and treat them as a set of faulty circuits obtained from the same linear circuit by injecting faults.

An assumption made at this point is that a group of faulty circuits which start from the same polytope will be found together in the same polytopes for the rest of the simulation period. When this assumption does not hold, the validity of the grouping has to be checked and regrouping has to be performed. It has been observed in the implementation of the fault simulator based on this method that in most cases the responses of faulty circuit groups obtained in this way remain close to each other for the rest, or a large part, of the simulation. To maintain the time efficiency of the simulator, checks on the validity of the grouping can be introduced at intervals, or at points where the input changes dramatically. More on this topic will be discussed in section 6.8.

The grouping of faulty circuits for efficient fault simulation purposes as described above will be called DC-bias grouping. By means of this new method, it is possible to treat a set of nonlinear faulty circuits as linear when it is justified that their Jacobian matrices are alike apart from the contribution of the faulty element (see equation 6.3). The fault simulation method that will be described in this chapter takes advantage of this property by using an equation similar to 6.5 to estimate the response of a faulty circuit once another faulty circuit (or the fault-free circuit) belonging to the same fault group has been calculated.

In each fault set, some faults can be found which yield faulty circuits with a DC operating point very far from the operating points of all other faults. These faults are set apart and simulated in the standard (SPICE) way. The circuits which are included in the same group are simulated in parallel. Each group has one representative fault whose DC point is close to those of most of the faults in the group. The data from the simulation of this fault is used for simplifying the simulation of the other (so-called dependent) faults in the same group.

6.4 One-Step Relaxation Method

The motivation and method of DC-bias grouping have been described in the previous section. When the parallel simulation of one group is considered, an approach similar to the one described in section 6.3 can be used. For nonlinear circuits, the linear equation 6.5 cannot be used to get the exact result. However, at each time point, once the response of the representative circuit has been calculated, the initial guesses of the other circuits can be derived from it by using a similar linear approximation, which is called one-step relaxation [Tian98].

First, it must be explained how the faults in a group are treated with respect to each other. In the selection process, one fault per group is assigned to be the representative, and the other faults to be dependent. The NR iterations of the faulty circuit due to the representative fault are performed first at each time point. Unlike the faulty circuits due to dependent faults, the initial guess of the circuit with the representative fault is taken as the solution of the same faulty circuit at the previous time point. The Jacobian J_r and the current vector F_r are stored to be used in the simulation of the dependent circuits. The J_r and F_r of the representative circuit are the only data that the program has to save in order to perform fewer operations in the simulations of the dependent faults.

A graphical explanation of one-step relaxation is given in figure 6.4. Here a one-dimensional system is assumed for simplicity. The vertical axis represents the MNA expression F, i.e. the function that has to be smaller in absolute value than the current accuracy at the end of the NR iterations (MNA equation: F(x) = 0). The horizontal axis represents the unknown parameter x. As can be seen in the figure, it is assumed that the two exact solutions x*_dep and x*_r lie sufficiently close to each other, and that both the representative MNA expression F_r and the dependent MNA expression F_dep are weakly nonlinear in the vicinity of the solutions. In general, when these conditions are satisfied, the difference Δx = x_dep - x_r between the solutions of the dependent and representative circuits can be estimated as [Tian98]

\[
\Delta x \approx -J_{dep}^{-1}(x_r) \cdot F_{dep}(x_r) \tag{6.7}
\]

where J_dep is the Jacobian of the dependent circuit. Because the operating regions of the transistors are similar and the exact solutions x*_dep and x*_r are

close to each other, the Jacobian matrices J_r and J_dep will have equal or very close component values apart from the fault stamps of the respective faulty elements that the two circuits contain. For this reason, in most cases J_dep can be obtained by adding the four nonzero elements given in equation 6.3 to J_r. The initial guess for x_dep (denoted as x^(0)_dep in figure 6.4), calculated using the Δx obtained from equation 6.7, is used as the initial guess for the NR iterations of the dependent circuit. The aim of DC-bias grouping is to obtain such good initial guesses for each dependent circuit that a substantial amount of equation building and solving time can be saved by decreasing the number of NR iterations necessary for convergence.
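The following sketch (NumPy, with hypothetical interfaces; the names are not from any real simulator) shows the computational core of this scheme: the dependent Jacobian is formed by adding the bridging-fault stamp of equation 6.3 to the stored J_r, and equation 6.7 yields the seed for the dependent NR iterations. The toy check uses a linear F, for which the one-step guess is already exact.

```python
import numpy as np

def bridge_stamp(N, n1, n2, Gf):
    """Bridging-fault stamp of equation 6.3 as a dense N x N matrix."""
    dJ = np.zeros((N, N))
    dJ[n1, n1] = dJ[n2, n2] = Gf
    dJ[n1, n2] = dJ[n2, n1] = -Gf
    return dJ

def one_step_relaxation(J_r, F_dep_at_xr, x_r, n1, n2, Gf):
    """Initial guess x_dep^(0) = x_r + dx, with dx from equation 6.7."""
    J_dep = J_r + bridge_stamp(len(x_r), n1, n2, Gf)
    return x_r - np.linalg.solve(J_dep, F_dep_at_xr)

# Toy check with a linear F(x) = D x + c, so J = D and one step is exact.
N, Gf, n1, n2 = 4, 0.5, 0, 2
D = np.diag([2.0, 3.0, 4.0, 5.0])
c = np.ones(N)
x_r = np.linalg.solve(D, -c)                  # "representative" solution
J_dep = D + bridge_stamp(N, n1, n2, Gf)
F_dep_at_xr = J_dep @ x_r + c                 # dependent residual at x_r
x0 = one_step_relaxation(D, F_dep_at_xr, x_r, n1, n2, Gf)
assert np.allclose(J_dep @ x0 + c, 0.0)       # lands on the dependent solution
```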

Figure 6.4: One-dimensional demonstration of the one-step relaxation method

6.5 Grouping Methods

The grouping of faulty circuits that have similar operating points can be done in various manners. Initially, a fuzzy grouping called fuzzy c-means clustering [Seli92] seemed suitable for this task, since such a grouping scheme can take into account the size of the groups and the distances between the operating points within a group in a flexible manner. It is, for instance, not desirable to have a group of 30 faulty circuits whose operating points are not extremely close to each other; two groups of 15 with much closer clustering of operating points are preferred. On the other hand, when a very small distance between the DC points is demanded, there can be too many groups, which decreases the advantage of the method. Thus, a good balance between the DC-point distances within a group and the number of groups is necessary for maximizing the time gain from DC-bias grouping.

The fuzzy c-means clustering has been performed using an iterative algorithm from the fuzzy toolbox of MATLAB. Since it becomes too complex to use the DC value of each node as input to this algorithm, a prespecified number of nodes with the highest sensitivity to faults (the ones whose values in the various faulty circuits show most variation) and the least correlation with each other are selected. The DC voltage values corresponding to these nodes are then grouped using the MATLAB fuzzy c-means script. This scheme provided a good number of groups, and the DC values were in most cases sufficiently close to each other within a group. However, the resulting groups led to bad initial points for some circuits with dependent faults. The reason for this can be explained as follows: the grouping is done in order to treat the nonlinear elements in the circuit as piecewise linear elements at the beginning of each time step, so as to have a good initial guess for the dependent circuits. For this reason, it is required that the faulty circuits in a group contain transistors that operate in the neighborhood of each other, in order to satisfy the linearity property. The fuzzy grouping, on the other hand, does not consider the transistor operating regions but purely the voltage values from the initial DC simulation. In some cases this results in circuits in the same group with very different combinations of transistor regions, even if the DC results lie close enough together to be put in the same group. Another disadvantage of fuzzy c-means grouping is that the iterative algorithm used for this script can lead to excessive computation time for grouping in the case of larger circuits.

The disadvantages of fuzzy c-means grouping led to the use of a direct grouping scheme based on the operating regions of the nonlinear elements, sketched in the example below. In this scheme, the operating regions of the transistors (and of other nonlinear elements that are present) are listed for each fault, and those faults having all transistors in the same regions are grouped together. Each time the device equations are evaluated, each transistor also gets a region attribute, such as 'cutoff', 'linear' or 'saturation'. This requires no extra simulation; only the storage of the region attribute is required, because the attribute value is a by-product of the device evaluations. After the DC simulation step, each faulty circuit (and the fault-free circuit) has an attribute sequence, which is the combination of the states of all transistors in the circuit. For example, an attribute word for a three-transistor circuit could look like 'SSL', meaning that the first two transistors are in the saturation region and the last one is in the linear region at t = 0 (the DC simulation step). The attribute sequences are then ordered, and groups are formed from those faults which have the same attribute words. In the case of other nonlinear devices such as diodes, the procedure is similar: in general, for each nonlinear device in the circuit, one region attribute has to be included in the attribute sequence.

Transistor region grouping has turned out to be more accurate, and it costs less CPU time than fuzzy c-means clustering. The reason for this is that determining the transistor regions already occurs in the evaluation of the transistor equations during standard simulation steps. The only extra computation that has to be performed is the ordering of the attribute sequences.
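A minimal sketch of this bookkeeping is given below (the fault identifiers and the choice of the first fault of a bucket as representative are illustrative assumptions; the thesis selects the representative by the proximity of the DC points):

```python
from collections import defaultdict

def group_by_regions(fault_regions, min_group_size=2):
    """Group faults whose circuits share the same attribute word.

    fault_regions maps a fault id to its attribute word, e.g. 'SSL'
    (saturation/saturation/linear), collected as a by-product of the
    DC device evaluations. Buckets that stay too small are returned
    as ungrouped faults, to be simulated in the standard SPICE way.
    """
    buckets = defaultdict(list)
    for fault, word in fault_regions.items():
        buckets[word].append(fault)
    groups, ungrouped = [], []
    for word, faults in buckets.items():
        if len(faults) >= min_group_size:
            groups.append({"regions": word,
                           "representative": faults[0],
                           "dependent": faults[1:]})
        else:
            ungrouped.extend(faults)
    return groups, ungrouped

regions = {"f1": "SSL", "f2": "SSL", "f3": "SCL", "f4": "SSL", "f5": "CCL"}
groups, ungrouped = group_by_regions(regions)
# -> one 'SSL' group (f1 representative, f2 and f4 dependent); f3, f5 ungrouped
```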

6.6 Partial LU Update

In section 6.4, the inverse of the matrix J_dep is used for calculating Δx, from which the initial guess x_dep can be obtained. LU decomposition is the standard way to calculate Δx in this case; direct matrix inversion is not used, since it is computationally much more expensive. As explained in chapter 5, LU decomposition is also used in the NR iteration steps of the standard SPICE flow, which means that the decomposition of J_r has already been obtained when the solution x_r(t) is calculated. The commonly-used LU decomposition algorithm involves a submatrix approach in which the matrix elements are processed starting from the upper left corner and ending at the lower right corner. This ordering is demonstrated in figure 6.5(a), where the locations of the elements to be processed first are represented by the darkest color.

Figure 6.5: (a) Order of LU decomposition. (b) The structure of the intermediate matrix

In other words: once J_r is obtained and stored, it is possible to reuse it by modifying only the part changed by the resistive bridging fault of each faulty circuit, provided that the matrix elements to be changed are not located in the upper left part of the matrix. In order to achieve this, after step m of the LU decomposition of a square matrix A of dimension N (where N > m), the results can be stored in one matrix. This matrix then includes the elements of L and U for rows and columns 1 to m, and the submatrix A^(m) (of size N - m), which is the updated form of the corresponding submatrix of A to be used for calculating the LU coefficients at step m + 1 (see figure 6.5(b)). If A is a Jacobian, and each bridged node of the circuit corresponds to a column f of A such that f > m, then this intermediate matrix can be used for evaluating Δx for each dependent circuit.

In [Augu98], a similar technique is used for parametric faults in linear elements, although the method is also applicable to some other fault types and nonlinear elements. This method is based on adding one variable to the vector of unknowns x, so that the added fault is always represented in the last equation. For example, the introduction of a bridging fault between two circuit nodes requires adding the current i_f through the bridging resistance R_f as a new variable to the system of equations. In this way, the initial system of equations is augmented with the equation

\[
v_j - v_{j'} - i_f R_f = 0 \tag{6.8}
\]

where j and j′ are the nodes between which the fault is introduced. Because the change in the system matrix is only in the last row, only the last rows of the L and U matrices have to be updated. The usage of this update in a fault simulator for the DC simulation of parametric faults in linear elements, and of structural faults that can be represented as linear elements, is described in [Augu98]. A rough analysis of the method shows that for linear circuits with a large fault list, the method is √n times faster than a standard SPICE simulation, where n is the number of nodes of the circuit [Augu98]. Practical results obtained from the implemented simulator FARUBS [Augu98] show speedups which are higher than this expected value: up to 160 times speedup in simulation time with respect to standard SPICE simulations has been reported.

Although based on an interesting idea, the methodology used in FARUBS has a number of drawbacks. First, a general limitation of FARUBS concerns the introduction of faults in nonlinear elements. Since these faults cannot always be described as an added element, it may be necessary to increase the size of the system matrix by more than one row, or to change significant parts of the system matrix itself. In both cases, the large computational gain obtained for linear faults will not be achievable. Another limitation in the case of nonlinear circuits is that all non-faulty nonlinear elements are assumed to remain in the same operating region after a fault is introduced. This cannot be guaranteed in general, especially when hard structural faults are considered.

Where transient simulation is concerned, LU decomposition has to be applied repeatedly to the Jacobian matrix for each NR iteration at each time point during the simulation of each fault. For decreasing the simulation costs due to these calculations, the usage of a model similar to the one summarized above has been considered. The second disadvantage mentioned above can be circumvented by means of grouping, since in this case there should not be very large differences between the Jacobian matrices of faults in the same group. Another limitation of this method for parallel simulations is the risk of having Jacobian matrices of different sizes for different sorts of faults, which presents practical problems for parallel simulation. For this reason, a more direct method for the LU update will be used here.

Our partial LU update methodology involves checking the faulty nodes and the pivoting while the representative circuit is being simulated. In this way, the part of the Jacobian that must remain constant during the parallel simulation of the faults in the same group is identified. In general, let M = {m1, m2, ..., mk} be the set of nodes to which the faults in a fault group are connected, and let m be the element of M which corresponds to the node whose voltage occurs first in the vector of unknowns x. In this case, m determines which part of the Jacobian will not change, since all the elements due to faults will be introduced at nodes that occur at or after m in x. With m identified, the intermediate LU matrix LU_int of J_r can be calculated. The resulting structure is similar to the one shown in figure 6.5(b), with A replaced by J_r. This intermediate matrix is stored during the simulation of all the faults in the fault group for the specific time point.
During the simulation of a dependent fault, the intermediate matrix that has been stored during the simulation of the representative fault can be updated as follows. The intermediate matrix will look like

\[
LU_{int} =
\begin{pmatrix}
U_{11} & U_{12} & \cdots & U_{1m} & U_{1(m+1)} & \cdots & U_{1N} \\
L_{21} & U_{22} & \cdots & U_{2m} & U_{2(m+1)} & \cdots & U_{2N} \\
\vdots &        & \ddots & \vdots & \vdots     &        & \vdots \\
L_{m1} & L_{m2} & \cdots & U_{mm} & U_{m(m+1)} & \cdots & U_{mN} \\
L_{(m+1)1} & L_{(m+1)2} & \cdots & L_{(m+1)m} & J^{(m)}_{r,(m+1)(m+1)} & \cdots & J^{(m)}_{r,(m+1)N} \\
\vdots &        &        & \vdots & \vdots     & \ddots & \vdots \\
L_{N1} & L_{N2} & \cdots & L_{Nm} & J^{(m)}_{r,N(m+1)} & \cdots & J^{(m)}_{r,NN}
\end{pmatrix}
\tag{6.9}
\]

The updating of this matrix can be done by first adding the faulty elements due to the dependent fault. By the choice of m explained above, these elements will correspond to locations in the lower right corner, in the submatrix J^(m)_r. The elements of this submatrix under LU decomposition have the property that the addition of the faulty elements directly yields the submatrix as it would have been if the faulty Jacobian had been decomposed from scratch. After this addition, the remaining steps of the LU decomposition are performed.
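The sketch below (NumPy, illustrative; it uses no pivoting, which presumes a suitably conditioned Jacobian, whereas the implementation checks the pivoting during the representative simulation) demonstrates this reuse: the first m elimination steps of J_r are cached, the dependent fault stamp is added to the trailing submatrix J_r^(m), and only the remaining steps are redone.

```python
import numpy as np

def lu_steps(M, start, stop):
    """Doolittle elimination steps start..stop-1, in place and unpivoted.
    After step k, column k below the diagonal holds L, row k holds U, and
    the trailing block holds the updated submatrix A^(k+1)."""
    for k in range(start, stop):
        M[k+1:, k] /= M[k, k]
        M[k+1:, k+1:] -= np.outer(M[k+1:, k], M[k, k+1:])
    return M

def partial_lu_solve(intermediate, m, stamp, rhs):
    """Finish the cached m-step intermediate of J_r for a dependent fault
    whose stamp only touches indices >= m, then solve (J_r + stamp) x = rhs."""
    N = intermediate.shape[0]
    LU = intermediate.copy()
    LU[m:, m:] += stamp[m:, m:]        # inject the fault into J_r^(m)
    lu_steps(LU, m, N)                 # remaining decomposition steps only
    y = rhs.astype(float)              # forward substitution (unit-lower L)
    for i in range(N):
        y[i] -= LU[i, :i] @ y[:i]
    for i in range(N - 1, -1, -1):     # backward substitution (upper U)
        y[i] = (y[i] - LU[i, i+1:] @ y[i+1:]) / LU[i, i]
    return y

# Self-check: bridge stamp at nodes 4 and 5 (both >= m = 3).
rng = np.random.default_rng(1)
N, m, Gf = 6, 3, 0.1
J_r = rng.standard_normal((N, N)) + N * np.eye(N)
S = np.zeros((N, N))
S[4, 4] = S[5, 5] = Gf; S[4, 5] = S[5, 4] = -Gf
b = rng.standard_normal(N)
cached = lu_steps(J_r.copy(), 0, m)    # stored once per group and time point
assert np.allclose(partial_lu_solve(cached, m, S, b),
                   np.linalg.solve(J_r + S, b))
```

The key property exploited here is that a stamp confined to rows and columns ≥ m leaves the first m elimination steps untouched, so the cached intermediate plus the stamp yields exactly the trailing submatrix a full decomposition of the faulty Jacobian would have produced.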

6.7 Parallel Simulation

The grouping of the faults and the usage of data from the representative fault have been explained in the earlier sections. In this section the total simulation flow will be presented based on these ideas. A flowchart of our fault simulation tool is given in figure 6.6. After the initial DC simulation step (box 1 in figure 6.6), DC-bias grouping is performed based on the transistor operating regions (box 2). The transient simulation of the ungrouped faults (box 3) is carried out first, in a sequential manner (i.e. fault after fault). The simulation of the grouped faults is then carried out in parallel at each time point. For each group, first the circuit with the representative fault is simulated (box 4). During this simulation, the LU decomposition that is to be reused is stored. When the solution is reached for the representative fault, the next time step for all the circuits in the group is computed and stored (box 5). After this, for the same time point, the circuits with dependent faults are simulated (box 6). For each of these circuits, first the improved initial point is computed based on the result of the representative circuit; after this, the NR iterations are carried out until convergence is obtained. The same procedure is repeated for each fault group. When all groups are done, the same loop is repeated for the next time point. Note that the step choice can vary from group to group, so the value of the time point reached does not have to be equal for faults from two separate groups.

6.8 Implementation and Results

A prototype simulator [Engi00] has been implemented in order to evaluate the possible simulation time decrease of the method described in the previous sections. The implementation has first been realized in MATLAB and then converted to C++ code. The simulation tool works with the SPICE level 3 model, and the step size is chosen dynamically based on the estimated local truncation error.

Figure 6.6: The flowchart of the implemented simulator (after reading the netlist and fault list: 1: DC simulation of all faults; 2: grouping of faults; 3: transient simulation of ungrouped faults; then, per group and time point: 4: simulation of the representative fault; 5: selection of the next time point; 6: calculation of the initial guess and NR iterations for each dependent fault)

Sparse matrix methods have not been used in this implementation at this stage, since the aim has been to verify the usability of, and the CPU gain from, the described method on circuits of moderate size. The injection of faults is based on resistive bridging faults, although capacitive bridging faults and parametric faults of transistors, resistors and capacitors can also be included in this method. The grouping scheme can also easily be extended to include other nonlinear elements such as diodes. As such, the simulator is flexible in terms of circuit elements and fault models.

Before presenting the results, an important aspect of the CPU time comparisons has to be stressed. The simulation tool based on the described fault simulation model uses routines (e.g. for model evaluations, NR iterations and time step choice) implemented based on the SPICE functionality. Although these routines are based on standard methods, they are simple and possibly differ in efficiency from their commercial counterparts. Hence, for the sake of a fair comparison, the developed tool has not been compared to another SPICE implementation. Instead, a SPICE-like simulation tool using the same routines has also been implemented, and this tool has been used as the conventional simulation tool in the comparisons. As a result, the differences in CPU time cannot be caused by variations in the underlying algorithms, or by differences in simulation parameters such as accuracy, transistor model level or choice of integration method, since these are completely the same for the two simulations. Furthermore, the two tools have both been implemented in MATLAB and are run in the same simulation environment. To summarize, any difference in CPU time between the two modes of simulation arises entirely from the application of our fault simulation method. In the comparisons to follow, the standard SPICE simulation will be referred to as sequential simulation and our simulation method as parallel simulation.

The implemented program has been evaluated using two circuits. The first circuit, an operational amplifier, is adapted from one of the mixed-signal benchmark circuits presented in [Kami97]. It has a unity-gain bandwidth of 28 MHz and a DC gain of 91 dB. The transistor-level schematic of the circuit is shown in figure 6.7. The circuit is simulated as a buffer. A fault list of 29 faults has been simulated, so the total number of simulated circuits was 30, including the fault-free circuit. The fault list consisted of 5 different bridging faults simulated at various resistances between 100 Ω and 5 kΩ. Figure 6.8 shows the groups calculated for this circuit after the initial DC simulation. The last group seen in this figure consists of ungroupable faults, which are simulated in the standard (SPICE) way. In table 6.1, the total CPU usage for the transient simulation is given when the sequential and parallel methods are used for this example circuit. The simulations presented in this table have been performed within the MATLAB environment on a Pentium II 450 MHz PC, and the comparison in CPU time is made in terms of the millions of floating point operations (Mflops) used in the simulation.

Figure 6.7: Operational amplifier benchmark

Figure 6.8: Fault groups for the operational amplifier (x-axis: group number, y-axis: # of elements)

Circuit name                          Operational amplifier
# of elements: linear                 4
# of elements: nonlinear              7
# of faults                           29
# of fault groups                     5
# of ungrouped faults                 8
Simulation time (Mflops): Sequential  38.5
Simulation time (Mflops): Parallel    30.2
Speedup (%)                           21.5

Table 6.1: Comparison of results for the operational amplifier

The second benchmark used is an active bandpass filter [Miln97], as given in figure 6.9. Its pass band is centered at 1 kHz, and the gain in the pass band is 0 dB. The operational amplifiers are the same as the one shown in figure 6.7. The circuit is simulated with a square-wave input with a frequency of 1 kHz and an amplitude of 3 V. A fault list of 172 faults has been used. In this case, the fault list consists of faults at different locations; at each location one bridging fault with a resistance of 200 Ω has been injected (i.e. various resistance values of the same bridging fault have not been simulated in this case). Figure 6.10 shows the groups calculated for this circuit after the initial DC simulation, with the ungroupable faults included in the last group; 76 of the 172 faults have not been grouped. A comparison similar to the one described for the operational amplifier has also been made for this circuit. In this case, the program has been executed on an HP J5600 workstation. The simulation has been performed on a flat, transistor-level netlist. The results of the CPU time comparison obtained from the simulations of this benchmark are presented in table 6.2.

An important point to note in the CPU time comparisons of the two benchmarks is that the speedup for the large circuit (the active filter) has been greater than that for the small circuit (the operational amplifier). This difference in speedup figures seems at first glance to contradict the fault group distributions of the two circuits, since the grouping shows that a larger proportion of the faults have been grouped for the operational amplifier than for the filter (see figures 6.8 and 6.10).

Figure 6.9: Active bandpass filter benchmark

Figure 6.10: Fault groups of the bandpass filter (x-axis: group number, y-axis: # of elements)

Circuit name                          bandpass filter
# of elements: linear                 27
# of elements: nonlinear              28
# of faults                           172
# of fault groups                     14
# of ungrouped faults                 76
Simulation time (sec): Sequential     3.602e+4
Simulation time (sec): Parallel       2.601e+4
Speedup (%)                           27.8

Table 6.2: Comparison of results for the active bandpass filter

However, the larger speedup gain can be explained by observing that the average number of iterations a circuit takes to converge at each time step grows with the circuit size. In the case of the operational amplifier, even if the improved initial guess is very close to the solution, a few iterations are still required before formal convergence is reached. For the filter, however, because the dimensions of the solution space are larger, even a small perturbation in the input requires a larger number of iterations before convergence is reached, which makes an improvement in the initial guess more significant in this case. Because of this, the effect of improved initial guesses can be better observed in the case of the filter. It is of course the question whether this will always be the case for larger circuits. Our expectation is that it will be, provided that the grouping is done correctly.

Another important point that has to be discussed is the usage of the LU update described in section 6.6. The usage of this method, as described earlier, is based on identifying the part of the Jacobian matrix that will not change during the simulation of the whole group. The various experiments made using this method have shown that the implementation as described here has only a marginal share in the speedup. The reason for this is that it is very often the case (especially for a smaller circuit) that one of the faults in the group has a fault stamp that includes the beginning rows of the Jacobian, making the update impossible. The fact that this occurs less often in the simulation of the filter can also be partially responsible for the larger speedup of this circuit. An alternative to this LU update method is to decompose, for example, 50% of the Jacobian during the representative simulation, to calculate the Jacobian first at each dependent simulation, and to take the difference of this dependent Jacobian with the stored Jacobian.

Figure 6.11: Comparison of the average number of iterations versus time for sequential and parallel simulations

If the first half of this difference is sufficiently small, the update can be performed in such a way that only half of the LU decomposition has to be done. If the difference in the first half of the Jacobian is not negligible, the full LU decomposition has to be performed. This method has the advantage that the overhead from checking for the minimal fault nodes per group is avoided. Another advantage is that the larger differences in the Jacobian which are found for some dependent faults can also be taken into account, instead of assuming that only the fault stamp will differ from the representative Jacobian.
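A sketch of such a check is given below (NumPy; the names, the 50% split and the tolerance are illustrative assumptions, not the thesis implementation). Since the first m elimination steps consume rows 0..m-1 and columns 0..m-1 entirely, both must match the stored representative Jacobian for the cached intermediate to remain valid; the trailing difference can then be added to the stored J_r^(m) block.

```python
import numpy as np

def half_lu_decision(J_r, J_dep, rel_tol=1e-12):
    """Decide whether the cached half decomposition of J_r can be reused."""
    m = J_r.shape[0] // 2              # illustrative 50% split
    scale = np.abs(J_r).max()
    lead_same = (np.abs(J_dep[:m, :] - J_r[:m, :]).max() <= rel_tol * scale
                 and np.abs(J_dep[:, :m] - J_r[:, :m]).max() <= rel_tol * scale)
    if lead_same:
        # the trailing delta is injected into the stored J_r^(m) block
        return "reuse", J_dep[m:, m:] - J_r[m:, m:]
    return "full", None                # fall back to a full decomposition

J_r = 3.0 * np.eye(4)
S = np.zeros((4, 4)); S[2, 2] = S[3, 3] = 0.1; S[2, 3] = S[3, 2] = -0.1
print(half_lu_decision(J_r, J_r + S)[0])    # 'reuse': only trailing block differs
S2 = np.zeros((4, 4)); S2[0, 0] = 0.1
print(half_lu_decision(J_r, J_r + S2)[0])   # 'full': leading rows changed
```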

One of the aspects that had to be evaluated is the effect of the improved initial points on the number of Newton-Raphson iterations. An overall decrease in Newton-Raphson iterations has been obtained for both circuits. To give an example, the total number of iterations in the simulation of the operational amplifier has decreased by 23.69%. When the faults are considered separately, it has been observed that very few faults (two in this case) had a larger number of NR iterations at some time points. In figure 6.11, a comparison of the average number of iterations versus time is shown. The solid line represents the iterations during the sequential (i.e. standard) simulation, and the dots represent the number of iterations during the parallel simulation. The improvement of the initial points can also be seen in figure 6.12, where the simulation results of the representative fault and of one of the dependent circuits from the same group are given. The initial guess at each time point of a waveform is displayed with a '+'. It can be seen that all the initial points in the second simulation lie on the final waveform, so in cases where extreme accuracy is not required the NR iterations can even be skipped entirely. However, the correctness of the groups should then be checked regularly, since the correctness of the simulation then depends on the correctness of the groups.

6.9 Conclusions and Future Research

The results presented show that the use of DC-bias grouping with the described methods can yield an improvement of 20-30% in the simulation time. This improvement is not sufficient for a number of fault simulation tasks, but the described method can play a part in decreasing the CPU time in the transient fault simulation of nonlinear circuits. It should be noted that these results are based on a flat netlist, so no gains from behavioral modeling or circuit hierarchy have been exploited. The results presented here are a good motivation for further research into combining this method with behavioral modeling and hierarchical fault simulation methods. The experiments show that the improvement of the initial points for the dependent faults is responsible for the largest part of the improvement, while the gain from the reuse of the LU decomposition has been observed to be marginal. However, this is due to the specific LU decomposition update method used (see the discussion in the previous section) rather than to the unsuitability of such schemes in general. More work on the integration of linear matrix solution methods, such as LU decomposition reuse or methods based on Householder's method [Hou98], into the simulator described here can increase the speedup figures.

The regrouping of faults is an issue that requires further attention and research. In the examples given in this chapter, regrouping was not performed. This decision has been based on the observation that in most cases regrouping was not needed. This does not mean that the operating regions of the transistors in the circuit remain the same throughout the simulation, but rather that the groups of faults resulting in the same transistor operating regions remain the same in most of the cases. In the few cases where the grouping changed at some time point, the initial points ended up further from the solution than they would have been if the solution of the previous time point had been taken.

Figure 6.12: Comparison of initial points in representative and dependent fault simulations

However, the correct solution is still reached. Hence the trade-off is between the number of cases in which the operating regions change enough to make regrouping necessary, and how far the initial guesses can end up once the grouping is no longer good enough. If the number of such cases is small, as observed here, and the initial guesses get worse but the solution can still be reached (in a larger number of iterations, obviously), then it can be a better idea to let the simulator deal with this, since rechecking the grouping very often brings a larger overhead than the few extra NR iterations do in this situation. However, the experiments and observations presented here are not sufficient to draw a hard conclusion about whether regrouping is necessary. Alternatives such as grouping based on input variation, or grouping based on the number of iterations of the dependent circuits, have to be investigated.

Finally, it has to be pointed out that issues such as the unfavorable numerical effects of introducing faults at some locations have not been included in the simulations. All faults causing this kind of problem are also left out of the scope of the comparisons. A check on the condition number of the Jacobian matrix has been included in the implemented simulator, which takes a faulty circuit out of the simulation as soon as its set of linear equations becomes ill-conditioned. These faults, and other faults which give convergence problems, are not included here, so that the efficiency of the fault simulation method used can be evaluated accurately. For a practical simulator it is crucial that these and other kinds of checks on the numerical stability of the equations are performed. Some initial research on the unreliable convergence properties of faulty circuits can be found in [Stra99]. More research on practical methods that can be used to detect these and other problematic conditions in fault simulation is required.

Parts of the grouping method, such as regrouping in the course of the simulation, other LU update methods, and the application of other existing linear methods to nonlinear transient simulation by means of DC-bias grouping, will have to be researched in more detail, preferably using other benchmarks. Another implementation, using sparse matrix methods, has to be produced in order to make the tool usable on larger circuits.

6.10 Bibliography

[Augu98] J. Soares Augusto and C. F. Beltran Almeida, "Fast Fault Simulation in Linear and Nonlinear Circuits with Fault Rubber Stamps," in Proc. of the 4th IEEE International Mixed-Signal Test Workshop, The Hague, Netherlands, June 1998, pp. 28-37.

[Bokh87] W.M.G. van Bokhoven, "Piecewise Linear Analysis and Simulation," in Circuit Analysis, Simulation and Design - Part II, A.E. Ruehli (Editor), pp. 129-166.

[Buur93] H.W. Buurman, From Circuit to Signal: Development of a Piecewise Linear Simulator, Ph.D. Thesis, Eindhoven Technical University, January 1993.

[Engi00] N. Engin and H.G. Kerkhoff, “A New Analog Fault Simulation Method Based on DC-Bias Grouping”, in Proc. of the 6th IEEE International Mixed-Signal Test Workshop, 2000, pp. 170-174.

[Forc99] C. Force, “Analog Fault Simulation: Key to Product Quality, or a Foot in the Door,” in Proc. of International Test Conference, 1999, pp. 650.

[Hou98] J. Hou and A. Chatterjee, "CONCERT: A Concurrent Fault Simulator for Analog Circuits," in Proc. of the 4th IEEE International Mixed-Signal Test Workshop, The Hague, Netherlands, June 1998, pp. 3-8.

[Hous64] A. S. Householder, The Theory of Matrices in Numerical Analysis, New York: Blaisdell, 1964.

[Kami97] B. Kaminska, K. Arabi, P. Goteti, J. L. Huertas, B. Kim, A. Rueda and M. Soma, "Analog and Mixed-Signal Benchmark Circuits - First Release," in Proc. of International Test Conference, 1997, pp. 183-190.

[Miln97] A. Milne, D. Taylor and K. Naylor, "Assessing and Comparing Fault Coverage when Testing Analog Circuits," in IEE Proc.-Circuits, Devices and Systems, vol. 144, no. 1, February 1997, pp. 1-4.

[Milo98] L.S. Milor, "An Introduction to Research on Analog and Mixed-Signal Circuit Testing," in IEEE Trans. Circuits and Systems-II: Analog and Digital Signal Processing, vol. 45, no. 10, October 1998, pp. 1389-1407.

[Nagi93] N. Nagi, A. Chatterjee and J.A. Abraham, "Fault Simulation of Linear Analog Circuits," in Journal of Electronic Testing: Theory and Applications, vol. 4, 1993, pp. 345-360.

[Need98] W. Needham, C. Prunty and E.H. Yeoh, "High Volume Microprocessor Test Escapes, an Analysis of Defects our Tests are Missing," in Proc. of International Test Conference, 1998, pp. 25-34.

[Pol 96] J.A. van der Pol, F.G. Kuper, E.R. Ooms, "Relation Between Yield and Reliability of Integrated Circuits and Application Failure Assessment and Reduction in the One Digit Fit and PPM Reliability Era," in Microelectronics Reliability, vol. 36, no. 11/12, 1996, pp. 1603-1610.

[Sale94] R. Saleh, S.-J. Jou and A.R. Newton, Mixed-Mode Simulation and Analog Multilevel Simulation, Kluwer Academic Publishers, Boston, 1994.

[Sebe95] C. Sebeke, J. P. Teixeira and M. J. Ohletz, "Automatic Fault Extraction and Simulation of Layout Realistic Faults for Analogue Circuits," in Proc. of the European Design and Test Conference, Paris, France, March 1995, pp. 464-468.

[Seli92] S.Z. Selim, M.S. Kamel, “On the Mathematical and Numerical Properties of the Fuzzy c-means Algorithm,” in Fuzzy Sets and Systems, Vol. 49, No. 2, 27 July 1992, pp. 181-191.

[Stip90] M.T. van Stiphout, PLATO: A Piecewise Linear Analysis Tool for Mixed-Level Circuit Simulation, Ph.D. Thesis, Eindhoven Technical University, May 1990.

[Stra99] B. Straube, K. Reinschke, W. Vermeiren, K. Robenack, B. Muller, C. Clauss, "On the Fault-Injection-Caused Increase of the DAE-Index in Analogue Fault Simulation," in Proc. of European Test Workshop, Constance, Germany, May 1999, pp. 118-122.

[Tian98] M.W. Tian and C.-J.R. Shi, "Efficient DC Fault Simulation of Nonlinear Analog Circuits," in Proc. of DATE-Conference, Paris, France, February 1998, pp. 899-904.

[Vari96] P. N. Variyam and A. Chatterjee, "Fast Fault Simulation of Analog Systems Using Polynomial Waveform Representations," in Proc. of 2nd International Mixed-Signal Test Workshop, Quebec City, Canada, May 1996, pp. 11-16.

[Voor97a] R. Voorakaranam, S. Chakrabarti, A. Gomes, S. Cherubal, A. Chatterjee and W. Kao, "Hierarchical Specification-Driven Analog Fault Modeling for Efficient Fault Simulation and Diagnosis," in Proc. of International Test Conference, 1997, pp. 903-912.

[Voor97b] R. Voorakaranam, A. Gomes, S. Cherubal and A. Chatterjee, "Hierarchical Fault Simulation of Feedback Embedded Analog Circuits with Approximately Linear to Quadratic Speedup," in Proc. of the 3rd IEEE International Mixed-Signal Test Workshop, Seattle, USA, June 1997, pp. 48-59.

[Xing98] Y. Xing, "Defect-Oriented Testing of Mixed-Signal IC's: Some Industrial Experiences," in Proc. of International Test Conference, October 1998, pp. 678-687.

[Yang99] Z. R. Yang and M. Zwolinski, “Fast, Robust DC and Transient Fault Simulation for Nonlinear Analogue Circuits,” in Proc. of DATE-Conference, Munich, Germany, March 1999, pp. 244-248.

[Zwol97] M. Zwolinski, A. D. Brown and C. D. Chalk, "Concurrent Analogue Fault Simulation," in Proc. of the 3rd IEEE International Mixed-Signal Test Workshop, Seattle, USA, June 1997, pp. 42-47.

Chapter 7

Conclusions and Recommendations

7.1 Summary of Results

In this thesis, the concept of a framework integrating design and test flow for mixed-signal IC’s has been developed. Two important aspects of the developed framework have been investigated in detail.

As presented in chapter 2, the problems involved in the testing of mixed-signal IC's often result from the inefficient ways of using design data for generating and optimizing test programs. Specifically, the problems in the design-test link have been found in two areas:

• Generation of test programs based on data from the designer and the design files,
• Evaluation of generated test programs based on the design netlist.

The first problem has been discussed in chapter 3. A general framework has been suggested for bringing the design and test flows closer to each other by providing a good sharing of design and test knowledge between these two activities. As an example, a framework implementing a design-test link for automatic test program generation based on design specifications is described in chapter 4. The implemented framework has been verified on an example

mixed-signal IC. The main conclusion of this experiment has been that the time gain in test debugging is expected in general to be larger than the time gain in test development. Another conclusion is that the use of the described framework makes it possible for a designer to generate tests without being familiar with the test aspects and test hardware required to carry out the measurements. Since the framework is based on the reuse of design and test effort, a full verification of the gain in efficiency has not been possible with only one example. The functioning of the implemented flow has been verified and demonstrated successfully, using a locally designed mixed-signal circuit in the ESPRIT framework.

The usage of specification-based tests is a standard customer requirement, so abandoning these tests is not an issue. However, the measurement of all required specifications does not guarantee that a large percentage of the potential defects are covered by the test. Therefore, the evaluation of the specification-based test programs is necessary to determine the fault coverage, and to augment the test program with more measurements if the fault coverage is not sufficiently high. This subject is treated in detail in chapters 5 and 6.

The main obstacle to defect-based test evaluation for analog and mixed-signal circuits is the prohibitively long simulation time it requires. Especially the simulation of tests based on transient measurements (which have an important place in mixed-signal testing) is known to be very time-consuming, and as a result it is rarely done in practice. A new method for treating this problem has been described in chapter 6. The method is based on grouping the faulty circuits according to the DC-bias regions of their transistors and applying parallel simulation at each time point, which decreases the CPU time required for the calculations for faults in the same group. The method has been implemented in a prototype fault simulator, and the first results from this simulator show a CPU-time reduction of between 20 and 30 percent.
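As an illustration of the grouping idea (not the actual simulator code, which operates on circuit-level netlists and MNA matrices), the following sketch groups faulty circuits whose transistors sit in the same DC-bias regions at a given time point; each group can then share one linearized system:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Transistor:
    name: str
    region: str  # 'off', 'linear' or 'saturation' at the current time point

def region_signature(transistors):
    """Tuple of DC-bias regions of all transistors in one faulty circuit;
    circuits with equal signatures can share one linearized system."""
    return tuple(t.region for t in transistors)

def group_faults(faulty_circuits):
    """Group faulty circuits (dict: fault id -> transistor list) by the
    bias-region signature of their transistors."""
    groups = defaultdict(list)
    for fault_id, transistors in faulty_circuits.items():
        groups[region_signature(transistors)].append(fault_id)
    return groups

# Hypothetical example: two bridging faults leave both transistors in the
# fault-free regions and end up in one group; the third drives M2 linear.
faults = {
    "R12_bridge": [Transistor("M1", "saturation"), Transistor("M2", "saturation")],
    "R34_bridge": [Transistor("M1", "saturation"), Transistor("M2", "saturation")],
    "R56_bridge": [Transistor("M1", "saturation"), Transistor("M2", "linear")],
}
for signature, members in group_faults(faults).items():
    print(signature, members)
```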

7.2 Original Contributions of This Thesis

The following are the original contributions of this thesis:

• An integrated design-test flow concept, based on storing and using test and design knowledge in a systematic way, decreasing both the chance of test program errors and the test program development time.

• Implementation and verification of the described framework for prototype testing of mixed-signal circuits, covering the design part, the design-test link and the test part.

• An analysis of the computational complexity of using Householder’s method for analog fault simulation.

• A new method for transient fault simulation of nonlinear analog circuits with increased time efficiency.

• Implementation and evaluation of a prototype fault simulator based on this new simulation concept.

7.3 Recommendations

Developing a good systematic link between design and test flows still requires more research in design and test methodology and in CAD and CAT development. The current state of the art in commercial design-test link tools is focused on the test software and hardware. Solutions which aim at linking the design and test flows, such as the one described in chapter 4, must be pursued further. What has not been possible within the framework of this thesis is to present ‘real-life’ results from the usage of such a framework in an industrial development and production environment. This is necessary for a more realistic evaluation and improvement of the framework.

Effort has also been put into making use of simulation files for test generation; however, it has been concluded that the only possible link that can be made between a simulation and a measurement is the ‘aim’ of the simulation. The use of test benches for analogue IC blocks can be a solution for this, since the aim of a simulation can be included in the test-bench file, at least for the standard simulation types (a small sketch of such an annotated test bench is given at the end of this section). A good direction for future research is to look for ways of including this option in a (semi-)automatic analog synthesis and verification environment.

As to efficient fault simulation techniques, a number of open points remain with regard to the method described in chapter 6. The method is very flexible towards the application of other efficiency-increasing techniques; however, time allowed only one technique, improved initial points, to be evaluated thoroughly. The usage of the LU update in other ways has to be investigated, since the usage described here requires significant bookkeeping and can lead to inaccuracies. The grouping of faults is another point where more research must be done. Especially efficient checks that can determine the quality of a grouping would be useful to decide whether or not to regroup during the course of the simulation. Last but not least, efficient checks on the numerical problems that can be introduced by the faulty components have to be investigated. These can not only prevent arriving at incorrect test evaluation results, but also prevent the waste of simulation time due to convergence problems.
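As referred to above, the following is a minimal sketch of how the ‘aim’ of a simulation could be recorded in a test-bench description so that a test generator can match simulations to measurements; the class and field names are hypothetical and are not part of MISMATCH or of any existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestBench:
    block: str                 # analogue IC block under simulation
    aim: str                   # standard simulation type, e.g. 'frequency response'
    stimuli: dict = field(default_factory=dict)
    observed: list = field(default_factory=list)

tb = TestBench(
    block="opamp_buffer",
    aim="frequency response",
    stimuli={"input": "AC sweep 10 Hz - 10 MHz, 20 mV"},
    observed=["vout_magnitude", "vout_phase"],
)

# A test generator could use tb.aim to pick the matching measurement
# procedure (here: a frequency response test) and reuse tb.stimuli as
# default test conditions.
print(tb.aim, "->", tb.observed)
```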

Appendices

Appendix A

Example of a Virtual Instrument

A VI (Virtual Instrument) is a measurement program written in a graphical language, implementing a specific test procedure. This appendix gives a short overview of the structure of a VI, together with an example, and explains how a VI is used within the MISMATCH framework.

A.1 Front Panel

The front panel of a LabVIEW VI (see Figure A.1) is the user interface of the program, containing all the buttons and screens needed to control the VI and to retrieve data from it. In the MISMATCH framework, instead of the front panel, the sequencer window is used for centrally changing the test parameters and settings of the individual VI’s in the automatically generated test plan. The outputs of the test are saved as log files and can be viewed by means of a special viewing VI for easy interpretation. However, it is also possible to open each VI from the sequencer window. In Figure A.1, the front panel of the frequency response test VI is given. The test parameters seen in this figure make it possible to run the test for different blocks with similar functionality.


Figure A.1: Front panel of the frequency response test VI

In this example, the parameters fmin and fmax determine the frequency range over which the frequency response is obtained. The parameter points controls the number of frequency points to be measured. During test plan generation, the test parameters fmin and fmax are derived from the design parameter unity-gain bandwidth of the operational amplifier. The parameter amplitude is requested as an operating point from the designer during test selection (explained in detail in chapter 4). The parameter points has a default value, which can be changed in the sequencer window if desired.
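A minimal sketch of such a derivation is given below. The actual derivation rules reside in the MISMATCH test library; the specific mapping used here (three decades below to one octave above the unity-gain bandwidth) is only an assumption for illustration:

```python
def frequency_response_params(unity_gain_bw_hz, points=50):
    """Map the op-amp design parameter (unity-gain bandwidth) to the
    fmin/fmax/points test parameters of the frequency response test VI."""
    fmax = 2.0 * unity_gain_bw_hz      # cover the roll-off past the UGB
    fmin = unity_gain_bw_hz / 1000.0   # well inside the flat band
    return {"fmin": fmin, "fmax": fmax, "points": points}

# Hypothetical op amp with a 1 MHz unity-gain bandwidth:
print(frequency_response_params(1e6))
# {'fmin': 1000.0, 'fmax': 2000000.0, 'points': 50}
```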

A.2 Diagram

The diagram of a VI is the internal program which contains all the commands in the graphical language of LabVIEW. Besides arithmetic operations, the drivers used for the VXI devices can also be seen in this diagram.

Figure A.2: Diagram of the frequency response test VI

Appendix B

Mixed-Signal Test System: An Overview

B.1 Digital Tester Part

The digital tester is an ATS-200 high-speed tester from Integrated Measurement Systems (IMS). The tester has 96 digital channels that can apply patterns at a speed of 200 MHz. A fixture board covers the top of the tester, on which the device interface board (DIB) can be mounted. In the MISMATCH set-up, a conventional IMS mixed-signal DIB is used. The test socket that maintains the connections to the CUT is mounted on the DIB. This socket has ATS digital pins connected to the digital pins of the CUT, and IMS analogue test channels (referred to from here on as ‘IMS channels’) connected to the analogue pins of the CUT. Load impedances, decoupling capacitors and other required components are included on the DIB [Kerk97].

Besides the drivers for the analogue modules, there also exists a LabVIEW library (TestLITE) for controlling the ATS tester. It makes it possible to define force, compare and power groups and to apply sets of test vectors. In MISMATCH, however, the digital test software of IMS (IMS screens) was used to test the digital parts, although it would also have been possible to do this via TestLITE VI’s.


Figure B.1: The mixed-signal test system

B.2 Analogue Tester Part

The analogue tester consists of a VXI mainframe with test routing modules and a so-called Slot-0 controller. The VXI instruments used in this mainframe are:

HPE 1445 A   Arbitrary waveform generator
HPE 1411 B   5½-digit multimeter
HPE 1420 B   200 MHz frequency counter
HPE 1428 A   Digitizer (high-frequency analyser)
B&K 3005     Input module (low-frequency analyser)
B&K 3105     Output module (low-frequency arbitrary waveform generator)
HPE 1472 A   Multiplexer (RF multiplexer)
HPE 1465 A   Multiplexer (switch matrix)

The last two instruments listed above are not stimulus or measurement devices but multiplexers. The connections between the VXI modules and the analogue test channels (IMS channels) are made via these sets of switches. The connection of the CUT to the analogue tester is made through the IMS channels.

The diagram of the connections of the terminals of the routing modules (RF multiplexer HPE 1472 A and matrix switch HPE 1465 A) to the VXI instruments in our set-up is given in Figure B.2. It can be seen in this diagram that there are 13 channels in total (IMS channels 1-11, IMS channel 15 and IMS channel 16) on the front panel of the analogue tester. Channels 1 to 11 are connected via the switch matrix to several modules. The bandwidth of these channels is limited to 10 MHz by the bandwidth of the multiplexer. Channel 15 is connected via the RF multiplexer (which has a bandwidth of 1.3 GHz) to a few measurement modules for higher-frequency measurements, and channel 16 is similarly connected to signal-generation modules for sourcing high-frequency signals. The positions of both the matrix switches and the switches between the banks of the RF multiplexer can be changed by modifying select values through the corresponding LabVIEW drivers. Thus, the connections of VXI modules to IMS channels can be modified using LabVIEW VI’s; however, the hardwired connections shown in the figure imply a set of restrictions on these connections. These restrictions are taken into account in the routing function of MISMATCH by means of a connection list that defines the possible connections and provides, for each of these connections, the corresponding matrix row and column.
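The sketch below illustrates the role of such a connection list; the entries and crosspoint numbers are invented examples, not the actual table behind Figure B.2:

```python
# Hypothetical connection list: it enumerates the hardwired routing
# possibilities and, for the switch matrix, the row/column that realises
# each (VXI module, IMS channel) connection.

CONNECTIONS = {
    # (VXI module, IMS channel) -> (matrix row, matrix column)
    ("HPE 1445 A", 1): (0, 0),
    ("HPE 1411 B", 2): (1, 1),
    ("HPE 1428 A", 3): (2, 2),
}

def route(module, channel):
    """Return the switch-matrix crosspoint for a requested connection,
    or raise if the hardwiring does not allow it."""
    try:
        return CONNECTIONS[(module, channel)]
    except KeyError:
        raise ValueError(f"{module} cannot be routed to IMS channel {channel}")

row, col = route("HPE 1445 A", 1)
print(f"close matrix crosspoint ({row}, {col})")
```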

B.3 Bibliography

[Kerk97] H. Kerkhoff, R. Tangelder, H. Speek and N. Engin, ESPRIT BLR Project #8820 ‘AMATIST’ Deliverable C1.D9, MESA Research Institute, University of Twente, May 1997.

Figure B.2: Connections of the mixed-signal test system

Appendix C

Verification Test Results for the Compass Watch Buffer Macro

Figure C.1: Frequency response test result for the buffer block


Figure C.2: DC sweep test result for the buffer macro

Figure C.3: Offset test result for the buffer macro at Vinp=1V

Vin (V)    Voffset (mV)
  1          -1.91
  2           0.09
  3           1.68
  4           4.67

Table C.1: Offset test results for the buffer at various input voltage values

Samenvatting

The work described in this thesis focuses on investigating new methods for the integration of design and test-development procedures for mixed-signal integrated circuits (IC’s). Mixed-signal IC’s are applied in various electronic systems such as telecommunication equipment, audio and video devices, automotive components, etc. Testing these IC’s is problematic because of the complexity of analogue functionality and the non-automated analogue design processes. The automatic generation of test programs for analogue components is a problem that has not yet been completely solved. Once a test has been designed, no formal methods exist to guarantee the quality of the developed tests, or the existing methods carry a large overhead. Systematic links between the design and test-development processes of analogue and mixed-signal circuits are needed to improve these points and to guarantee high quality and a short time-to-market.

The various aspects of the mixed-signal test problem are treated in chapter 2 of this thesis. A general discussion is given of current mixed-signal design and test practice, the implications of market requirements for test methodology, and the future challenges for mixed-signal testing. The conclusion is that long test-development times, long debugging times and the lack of methods for realistic test evaluation are currently the most important challenges for mixed-signal testing.

Chapter 3 focuses on defining a general framework for the integration of mixed-signal design and test activities along the IC design trajectory. To this end, the test considerations at various levels of abstraction are given, and the current state of the art and the possibilities in virtual testing and design-test link tools are presented. At the end of this chapter, a general framework for linking mixed-signal design and test is presented.

Chapter 4 treats and implements an environment for the integration of design and test flows for the prototype testing of mixed-signal IC’s.

The described environment (MISMATCH) is based on sharing design data with the test environment in order to generate specification-based prototype tests during the design trajectory. The functionality consists of selecting test methods from a test library, using macro specifications to select test functions, generating control signals for accessing the parts to be tested, and adding information for automatic routing to the desired tester modules. MISMATCH has been implemented making use of existing design and test trajectories, and the resulting system has been used for the design and test trajectory of a mixed-signal IC. The corresponding test results are included.

In chapter 5, an important aspect of the generated tests, namely test effectiveness, is treated. The way tests are generated in the MISMATCH framework, as described in chapter 4, only guarantees that certain parameters are measured, not that the process-related fault possibilities are covered. In order to achieve high IC quality, it is necessary that tests address the cause, and not only the consequence, of defects. For this reason, the connection between IC fabrication processes and test quality is investigated in this chapter. The chapter closes with the conclusion that fault-simulation time is one of the biggest challenges in evaluating the quality of mixed-signal tests. A discussion of the existing methods for reducing fault-simulation time is presented.

The goal of reducing fault-simulation time is addressed in chapter 6, where a new method for efficiently simulating faults in mixed-signal circuits is presented. This method is based on reducing the complexity of fault simulations by making use of parallel simulation techniques. A prototype fault simulator has been implemented to verify the speed improvement of this new method. This prototype uses resistive bridges as a fault model, although the method is also applicable to other fault models. The experiments performed with this prototype simulator are presented in this chapter. The results indicate that a simulation-time reduction of 30% is possible compared with conventional methods.

About the Author

Nur Engin was born in Ankara, Turkey, on 16 September 1970. After completing her high-school education at Atatürk Anadolu Lisesi in Ankara in 1988, she spent one year in the United Kingdom with the student exchange programme AFS, during which she studied Physics and Mathematics at Birkenhead Sixth Form College. In 1989 she started her university education at the Electrical and Electronics Engineering Department of Middle East Technical University (METU) in Ankara. In 1991 she received practical training at ASELSAN, Ankara, where she worked on the design of active filters. In 1992 she spent two months at the Helsinki University of Technology in Finland, where she worked within a project on the technical aspects of the housing of the future. In July 1993 she completed her Bachelor’s degree (B.S.) at METU, with a graduation project on the use of neural networks in the control of highly unstable systems. In September 1993 she began her Master of Science (M.Sc.) study on VLSI testing. Her M.Sc. work was the development of a genetic algorithm for inserting test points in combinational digital circuits. During her M.Sc. study she worked as a teaching assistant, supervising practical courses on circuit theory and digital signal processing. After completing her M.Sc. degree in January 1996, she started to work towards her Ph.D. degree at the ICE department of the University of Twente in Enschede, the Netherlands. Her Ph.D. research concerned design-test integration for mixed-signal integrated circuits. In the summer of 1998 she spent a three-month practical training period at Philips Semiconductors in Nijmegen, the Netherlands, where she worked on the usage of defect-oriented testing in an industrial design and test flow. This thesis summarizes the results of her Ph.D. research.

Nur Engin is currently working for Philips Research Laboratories in Eindhoven, the Netherlands.

Index

ATE
    hardware, 31
    software, 31
ATPG, 6, 35
attribute sequence, 166
built-in self-test, 33
control signal, 69
cost, 28
critical area, 121
DC-bias grouping, 157, 162
defect, 25
defect level, 25
design debug, 18
design evaluation, 18
design for testability, 4, 32
design parameter, 68
design-test link, 40, 50, 66
    state of the art in, 63
device interface board, 22
fault, 25
    catastrophic, 119
    coverage, 26
        weighted, 124
    dependent, 162
    extraction, 121
    hard, 119
    modeling, 37
    parametric, 119
    probability, 122
    representative, 162
    simulation, 37
    soft, 119
    structural, 119
    weight, 123
fault simulation, 124
    complexity, 132
final test, 20
fuzzy c-means clustering, 165
Householder’s method, 140
Jacobian matrix, 129
LU decomposition, 130
    backward substitutions, 131
    forward substitutions, 131
macro, 17
    loading, 52
MISMATCH, 75, 81
    CAD flow, 82
    CAT flow, 88
    compass watch, 94
        analog part, 96
        test results, 105
    control signal generation, 90
    control signals, 77
    design and test flow, 77
    design for testability, 80
    DfT blocks, 82
    digital macros, 93
    framework, 76
    mixed-signal IC example, 94
    mixed-signal macros, 92
    simulation link, 83
    test database, 77
    test selection, 77
    test set, 77
        selection, 86
    tester routing, 77, 90
mixed-signal IC
    architectures, 16
    blocks, 16
modified nodal analysis, 126
Newton-Raphson iterations, 127
    convergence, 131
    quadratic rate of, 136
one-step relaxation, 163
operating condition, 68
production test, 20
prototype test, 18
PWL analysis, 159
quality, 24
rank one update, 161
region attribute, 166
set covering problem, 87
simulation-test link, 58
specification coverage, 117
test configuration, 61
    generic, 61
    instance, 61
test effectiveness, 116
test function, 22
test parameter, 69
test plan, 23
    generation, 65
test program, 22
    evaluation, 37, 66
testing error
    type 1, 26
    type 2, 26
time-to-market, 30
transistor region grouping, 166
virtual instruments, 31, 79
virtual test, 32, 60
    state of the art in, 64
wafer test, 20
yield
    process, 25
    test, 25
