Multi-Attribute Design for Authentication and Reliability (MADAR)

Dissertation

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

in the Graduate School of The Ohio State University

By

Matthew J. Casto, B.S.E.E., M.S.

Graduate Program in Electrical and Computer Engineering

The Ohio State University

2017

Dissertation Committee

Dr. Waleed Khalil, Advisor

Dr. Steven Bibyk

Dr. Marvin White


Copyrighted by

Matthew J. Casto

2017


Abstract

Increased globalization of design, production, and independent distribution of integrated circuits (ICs) has provided adversarial and criminal opportunity for strategic, malicious, and monetary gain through counterfeiting, cloning, and tampering, producing a supply chain vulnerable to malicious or improper function and degraded reliability. Military, commercial avionics, medical, banking, and automotive systems rely on components providing high-security, high-reliability operation, and the impact can be large in terms of safety, readiness, mission success, and overall lifecycle cost when tampered parts find their way into the supply chain. Likewise, commodity platforms, such as the Internet of Things (IoT), rely on each networked component providing trustworthy authentication and identification, which has proven to be extremely vulnerable to cloning and spoofing when implemented through software or firmware solutions. Across these platforms, major effort has been focused on enhancing hardware assurance through intrinsic and unique physical hardware traits. Previous hardware authentication and identification techniques have targeted digital solutions that require increased logic overhead in order to obtain adequate uniqueness, have a limited number of implementation architectures, and suffer from significant environmental instabilities.

In this work, the process-induced variation response of analog mixed-signal (AMS) circuits is investigated to yield foundational anti-counterfeiting, anti-cloning design and characterization techniques. It explores unique behaviors, termed Process Specific Functions (PSFs), to identify and group circuits of the same pedigree and provide traits for authentication, individual chip identification, and reliability monitoring. PSFs are demonstrated through the expansion of fundamental quantization sampling theory to produce a statistically bounded digital-to-analog converter model as implemented within a transmitter architecture. Simulation capabilities showed predictable circuit traits, including random process variations for authentication and unique ID. The model showed 90% Probability of Detection (PoD) with less than a 10% false alarm rate for an individual process-specific cloning scenario, demonstrating foundational design capability for AMS counterfeit prevention and identification. The work makes significant progress towards quantifying design-specific authentication behavior for the first time in analog ICs. A parameter space of harmonic amplitude responses is correlated to random and systematic process variations to produce challenge-driven, non-linear, quantifiable and measurable distribution responses. These unique authenticity and reliability characteristics are related to physical process models in a low-power 90nm CMOS process, and are expanded for unique identification in a 130nm SiGe process technology. Collectively, this work provides a novel and foundational in-situ analog integrated circuit (IC) supply chain risk management (SCRM) and hardware security design framework.


Dedication

To my wife Erica and my “Engineers in Training”, Max, Peyton, and Ivan


Acknowledgments

I have more people to acknowledge and thank than I could ever have room to do justice to.

I would first like to acknowledge my family, who have not only been patient and supportive through this process, but have also provided the daily motivation and drive. I would like to acknowledge my sister for always having the courage to overcome and persevere. By doing so, she has given me one of the greatest gifts anyone could receive, the power of perspective! To my parents, for making sure I understood the value of hard work, and for always leading by example. To my Grandfather for teaching me what it meant to be an engineer. To my colleagues Len, Chris, Dave, Vipul, and my classmates that are now my colleagues, Jamin, Luke, Matt, and Sam. I am amazed and privileged to have had the opportunity to work with you. The contributions to the research would not have been possible without your critical evaluation, commitment to integrity, and availability to explore ideas. Special thanks to Dr. Luke Duncan for providing the hardware for analysis, for the extensive work on test automation, and for his persistent expertise in digital to analog conversion that laid a strong foundation to build on. To Aaron Jennings for taking a deep dive off the statistics cliff with me to bring all the principles of the work together.

To Dr. Greg Creech for looking forward to the need for hardware security well before the community caught up. To Dr. Steve Bibyk for your leadership and pathfinding in hardware security metrics. To Dr. Marvin White for your philosophy on the re-invigoration of mathematics in the engineering discipline that allowed me to think outside the box. To all of my committee members for the knowledge you passed on through classes, discussion, and feedback that enabled a successful body of work. To Dr. Brian Dupaix for his commitment to excellence, a constant springboard for ideas, and the lighthouse that navigated my research goals. To my advisor, Dr. Khalil, for allowing me to venture off the beaten path, for having the experience and wisdom to identify areas of convergence, and for tireless commitment to devote the time, patience, and communication to develop and demand expertise.

Thank you!


Vita

2003 ...... B.S.E.E (Summa Cum Laude), Wright State University

2003-Present ...... Senior Electronics Engineer, Air Force Research Lab

2005...... M.S. Engineering, Wright State University

Publications

Thesis

• Casto, Matthew James. 2005. AlGaN/GaN HEMT temperature-dependent large-signal model investigation, generation, and simulation. Thesis (M.S.), Wright State University, 2005.

Patents Pending

• M. Casto, B. Dupaix, W. Khalil, Mixed Signal Process Specific Function, Application No. 15/729,87, October 2017.

Conference Proceedings

• M. Casto, B. Dupaix, W. Khalil, "Multi-Attribute Design for Authentication and Reliability", Government Microcircuit Applications & Critical Technology Conference (GOMACTech) 2015, March 2015.

• J. McCue, M. Casto, J. Li, P. , W. Khalil, "An Active Double-Balanced Down-Conversion Mixer in InP/Si BiCMOS Operating from 70–110 GHz", IEEE Compound Semiconductor Integrated Circuit Symposium (CSICS), pp. 1-4, 2014.

• M. Casto, M. Lampenfeld, P. Jia, P. Courtney, S. Behan, P. Daughenbaugh, R. Worley, "100W X-Band GaN SSPA for Medium Power TWTA Replacement", Wireless and Microwave Technology, 12th IEEE Conference on, Apr. 1-4, 2011.

• T. Quach, M. Casto, et al., "Advanced Digital Beamformer Using Silicon Germanium Technology", Government Microcircuit Applications & Critical Technology Conference (GOMACTech) 2010, pp. 443-446, March 2010.

• M. Casto, S. Dooley, "AlGaN/GaN HEMT temperature-dependent large-signal model thermal circuit extraction with verification through advanced thermal imaging", 2009 IEEE WAMICON Dig., pp. 1-5, Apr. 2009.

• T. Quach, G. Creech, M. Casto, K. Groves, T. James, A. Mattamana, P. Orlando, "Widely Tunable SiGe MMIC-Based X-Band Receiver", Government Microcircuit Applications & Critical Technology Conference (GOMACTech) 2009, pp. 99-103, March 2009.

• M. Turowski, S. Dooley, A. Raman, M. Casto, "Multiscale 3D thermal analysis of analog ICs: From full-chip to device level", 14th International Workshop on Thermal Investigation of ICs and Systems, pp. 64-69, 2008.

• K. Groves, G. Subramanyam, T. Quach, R. Neidhard, M. Casto, P. Orlando, A. Matamana, "Design of a frequency agile X-band LNA using BST varactor based voltage tunable impedance matching networks", 2008 IEEE AP-S Int. Antennas Propag. Soc. Symp., pp. 1-4, Jul. 2008.

• M. Turowski, S. Dooley, P. Wilkerson, A. Raman, M. Casto, "Full-chip to device level 3D thermal analysis of RF integrated circuits", Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM 2008), 11th Intersociety Conference on, pp. 315-324, May 2008.

• D. Stevens, G. Subramanyam, K. Koss, M. Casto, R. Neidhard, K. Pasala, S. Schneider, J. Radcliffe, H. Griffith, "A periodically perturbed coplanar waveguide transmission line leaky wave antenna", Antennas and Propagation, 2007 IEEE International Symposium on, pp. 465-468, June 2007.

Fields of Study

Major Field: Electrical and Computer Engineering


Table of Contents

Abstract ...... ii

Dedication ...... iv

Acknowledgments...... v

Vita ...... vii

List of Tables ...... xiv

List of Figures ...... xv

Chapter 1. Introduction ...... 1

1.1 Threat space taxonomy ...... 5

1.2 Hardware Assurance Principles ...... 7

1.3 Authentication and Reliability in Integrated Circuits ...... 10

1.3.1 Physical One-Way or Unclonable Functions (PUFs) ...... 11

1.3.2 Electromagnetic Signature Technology ...... 12

1.3.3 RF-DNA Fingerprinting ...... 13

1.3.4 Reliability Prediction ...... 14

1.4 State of technology solutions ...... 15

Chapter 2. Authentication and Identification in AMS ICs ...... 17

2.1 Process Specific Function (PSF) Authentication ...... 17


2.2 PSF Unique Identification...... 19

2.3 CMOS Technology Considerations ...... 20

2.4 AMS System Decomposition ...... 21

Chapter 3. Digital to Analog Converter PSF ...... 24

3.1 Basic DAC Operation ...... 24

3.2 DAC Output Waveforms: ...... 27

3.3 Output waveform amplitude decomposition ...... 34

3.4 Finite sampling resolution ...... 36

Chapter 4. DAC Implementation Model ...... 39

4.1 DAC Architecture ...... 39

4.2 Variation Effects ...... 41

Chapter 5. DAC Authentication and Unique ID Framework ...... 47

5.2 Authenticity – Probability of Detection (PoD) ...... 59

5.2.1 ROC Curve Analysis...... 59

5.3 Model Comparison: Authentic vs. Suspect ...... 62

5.4 Unique Identification ...... 66

Chapter 6. Process Specific DAC implementation ...... 68

6.1 Process Considerations ...... 68

6.2 90nm DAC design and simulation ...... 69


6.3 Authentication and unique ID against model ...... 72

6.4 130nm Hardware Characterization ...... 75

6.4.1 Test set up ...... 76

6.4.2 Measurement resolution ...... 77

6.4.3 DAC Resolution vs Accuracy ...... 80

6.4.4 Oversampling Processing Gain ...... 83

6.4.5 Spectrum Analysis Considerations ...... 83

6.4.6 Implementation example ...... 84

6.5 Hardware Measurement Collection and Processing ...... 87

6.5.1 Authenticity verification ...... 88

6.5.2 Unique ID Classification...... 89

6.5.3 Changing amplitude mean through challenge-response ...... 95

Chapter 7. Application of PSF Framework for Reliability ...... 98

7.1 PSF for Reliability Framework ...... 98

7.3 Step-Stress Reliability Characterization ...... 102

7.3.1 First-pass DAC reliability analysis ...... 104

7.3.2 Bias Cascode Transistor (M2) Failure Mode: ...... 105

7.3.3 Switch Pair (M3,M4) breakdown failure mode: ...... 106

7.3.4 Reliability Summary ...... 109


Chapter 8. Conclusion and Future Work ...... 113

8.1 Research Summary ...... 113

8.2 Future Work ...... 114

Bibliography ...... 115


List of Tables

Table 1. Commercial vs. Military ICs [6] ...... 2

Table 2. Top-5 Most Counterfeited Semiconductors in 2011 [12] ...... 4

Table 3. Counterfeit avoidance methods [3] ...... 11

Table 4. Critical t-values for sample size and confidence for two-sided t-test [69] ...... 54

Table 5. 90nm statistical parameter data and as designed Z-scores ...... 74

Table 6. Input and Measurement Data ...... 87

Table 7. Initial data collection, exploration, and quality assessment tools ...... 88

Table 8. Measured vs. Theoretical mean values of harmonic data ...... 88

Table 9. Z-Scored Transformed Harmonic Data ...... 89

Table 10. Z-Score Transformation, covariance matrix in tabular form ...... 90

Table 11. Harmonic Principal Component Upper and Lower Bounds ...... 91

Table 12. Bias Cascode Transistor Time to Breakdown analysis ...... 105

Table 13. Switch Pair Failure Analysis ...... 108


List of Figures

Figure 1. Global Semiconductor Manufacturing [2] ...... 1

Figure 2. Commercial vs. Military Development ...... 3

Figure 3. IHS Reported Global Counterfeit Incidents [10] ...... 4

Figure 4. Supply Chain Vulnerabilities [3] ...... 6

Figure 5. Delay PUF: two paths with the same layout produce a unique response [25] . 12

Figure 6. Time, spectral measure of 2 WLAN cards from three manufacturers [29] ...... 13

Figure 7. Statistical metrics of 500 Fingerprints for 40 individual devices [32] ...... 14

Figure 8. Unique Process Specific Function block diagram general architecture ...... 18

Figure 9. Unique PSF Random Uniform distribution measurement space [Y, Z] ...... 19

Figure 10. Impact of RDF on threshold and channel dopants in MOSFETS [47] ...... 21

Figure 11. Transmitter Chain (a) block diagram (b) component level waveform tracking [50] ...... 23

Figure 12. Increased transmit dependency on DAC ...... 23

Figure 13. Basic DAC Operation and output spectral content...... 25

Figure 14. Foundational ideal DAC Spectrum ...... 25

Figure 15. Physical DAC spectrum ...... 26

Figure 16. Bit Transfer function for 3-bit Binary DAC ...... 28

Figure 17. Bit transfer function square-wave pulse train representation ...... 28


Figure 18. Quantized bit waveform response extraction with x(t)=sin(wot) ...... 32

Figure 19. DAC bit waveforms ...... 33

Figure 20. Harmonics vs. resolution for binary DAC ...... 34

Figure 21. Harmonic Amplitude contribution by MSB ...... 35

Figure 22. MSB contribution to each harmonic ...... 35

Figure 23. Finite resolution in sampling waveforms ...... 36

Figure 24. Sampling criteria for N-bit DAC based on waveform pulse width, 6 MSB .... 37

Figure 25. Log sampling criteria for N-bit DAC based on waveform pulse width ...... 38

Figure 26. Binary Current Steering DAC ...... 39

Figure 27. Bit waveforms with variation ...... 42

Figure 28. 3-Bit DAC harmonic variation effects total contribution across MSBs ...... 42

Figure 29. Mathematical operation of the sum of normal distributions ...... 43

Figure 30. Probability distribution of normalized amplitude with MSB 1 variation ...... 43

Figure 31. 3- Bit Binary Current Steering DAC Architecture with variation effects ...... 45

Figure 32. DAC spectrum parameter illustrated defined across variation ...... 47

Figure 33. Distribution for t-test, two-tail probability statistics ...... 54

Figure 34. Analysis of n=16 suspect samples with ≤ 5% Type I error ...... 55

Figure 35. Feature vector distribution space ...... 56

Figure 36. Measurement vector overlaid with expected distribution definition ...... 57

Figure 37. Z-Score two-tail distribution confidence ...... 57

Figure 38. Acceptance and failure values based on harmonic test and threshold ...... 58

Figure 39. Normalized PSF Spectrum Parameter Space ...... 61


Figure 40. Authentication Harmonic Parameter Comparison Space ...... 63

Figure 41. Authentication test for (P||Y)F ...... 64

Figure 42. Authentication test for (P||Y)5th Δ (P||Y)7th ...... 65

Figure 43. Representative Unique ID Parameter space ...... 67

Figure 44. Basic Current Steering DAC (a) implemented and (b) threshold distribution in 90nm CMOS...... 69

Figure 45. 90nm DAC current cell distributions of (a) LSB, (b) MSB 2, and (c) MSB ... 70

Figure 46. DAC waveform output at 1.145 GHz ...... 71

Figure 47. DAC waveform output at 331 MHz ...... 71

Figure 48. DAC output spectrum of 37MHz signal clocked at 4 GHz Full input/output 72

Figure 49. DAC Output expansion of 37MHz signal ...... 73

Figure 50. 90nm CMOS design harmonic parameter distributions with individual design mapped for authentication ...... 74

Figure 51. Die Photograph of DAC [74] ...... 75

Figure 52. Precision DAC Measurement Setup ...... 76

Figure 53. Model of mid-rise quantizer (a) function and (b) quantization error signal [50][58] ...... 78

Figure 54. Probability density function of uniformly distributed quantization error ...... 78

Figure 55. Harmonic content of DAC with overlaid white noise model ...... 79

Figure 56. Static performance specs of a 3-bit DAC [40] ...... 81

Figure 57. DAC distortion and SNR spectrum analysis measurement [47] ...... 84

Figure 58. DAC Output spectrum, Bandwidth from 0 to fs/2 ...... 85


Figure 59. DAC output measurement spectrum with resolution bandwidth of 50 kHz ... 86

Figure 60. Spectrum analyzer data measured on DAC, f=33.3 MHz and fs=3.35 GHz .. 87

Figure 61. PCA: All Bins and Devices ...... 92

Figure 62. PCA for Input Frequency 33.33 MHz ...... 93

Figure 63. PCA for Input Frequency 64.82 MHz ...... 93

Figure 64. PCA for Input Frequency 133.6 MHz ...... 94

Figure 65. PCA for Input Frequency 334.7 MHz ...... 94

Figure 66. PCA for Input Frequency 669 MHz ...... 95

Figure 67. Mean value of harmonic amplitude vs. input challenge frequency ...... 96

Figure 68. Unique Process Specific Function reliability analysis structure ...... 98

Figure 69. Reliability signature parameter over time relative to mean at t0 ...... 99

Figure 70. Reliability signature parameter with changing distribution mean over time .. 99

Figure 71. Improved current Steering DAC cell [50] ...... 101

Figure 72. Analog Mixed Signal Reliability DUT board ...... 102

Figure 73. Reliability single channel test station [84] ...... 103

Figure 74. Rolloff of bit waveform through increased stress ...... 104

Figure 75. Cascode Bias device voltage simulation ...... 106

Figure 76. Switch Pair voltage simulation values ...... 107

Figure 77. Switch Pair Measured SFDR Reliability Analysis ...... 109

Figure 78. Harmonic parameters changing over time, t0 to t4, through degradation ..... 110

Figure 79. Reliability signature parameter measured over time ...... 111

Figure 80. Reliability signature parameter measured on any initially authenticated part ...... 112


Chapter 1. Introduction

The over $300 billion [1] Integrated Circuit (IC) market continually grows in complexity, performance, capacity, and availability. These advancements are often countered by a reduction in reliability and trust, as a majority of circuit intellectual property (IP) design and integrated circuit (IC) fabrication activities are outsourced to third-party design houses and foundries. Specifically, the proliferation of commercial and consumer products containing ICs, through economies of scale, has led to rapid semiconductor growth in Asia, as shown in Figure 1.

Figure 1. Global Semiconductor Manufacturing [2]


The globalized and highly distributed nature of semiconductor foundries and third-party IP entities results in a semiconductor supply chain which exhibits several vulnerable points during the design, fabrication, and deployment phases of an IC. Exploitation of these vulnerabilities by an adversary through IP/IC piracy attacks may introduce security or trustworthiness threats including reverse engineering, counterfeiting, cloning, insertion of hardware Trojans, fault injection, and enhancement/extraction of side-channels [3].

Military, avionics, medical, banking, and automotive systems rely on high-security, high-reliability components, making the impact significant in terms of safety, readiness, mission success, and overall lifecycle cost when tampered and vulnerable parts find their way into the supply chain [4][5].

ICs specifically produced for military applications make up a small percentage of the total IC market, ranging from 0.1 to 1%. Due to these low volume requirements, the military cannot exert meaningful influence over the market, resulting in a growing disparity between commercial and military requirements, as noted in Table 1 and in the development timeline in Figure 2.

                      Commercial                       Military
System Life           < 5 years                        20-40 years
Quantities            Very High Volume (1,000,000's)   Very Low Volume (100's to 1,000's)
Fab Production Life   Approx. 2 years                  Decades
Environmental         0 to 70 °C                       -55 to 125 °C
Reliability/Quality   Lower (~10 years, non-hostile)   High (Hostile)
Market Share          >90%                             0.1%-1%

Table 1. Commercial vs. Military ICs [6]


Figure 2. Commercial vs. Military Development

Most IC fabrication and considerable intellectual property (IP) design is performed overseas, while domestic trusted-foundry options are often 2 to 3 process nodes (3 to 8 years) behind state-of-the-art (SoA), with 50% higher cost [7]. DoD investment in the Trusted Foundry Program, totaling $1.2B through FY13 [8], has not guaranteed a continued onshore supply of advanced-node technology. IBM, the US provider of 65nm, 45nm, and 32nm technology through the Trusted Access Program Office (TAPO), recently paid GlobalFoundries, owned by the Emirate of Abu Dhabi, $1.5B to take over its manufacturing operations, which had become unprofitable [9]. In this environment, low-volume, high-reliability systems, often using single-wafer part quantities, are forced to utilize commercially available foundries and part suppliers in order to maximize performance and reduce cost, providing adversarial tampering and financial counterfeiting opportunity.


Figure 3. IHS Reported Global Counterfeit Incidents [10]

Since 2001, the Electronic Resellers Association International (ERAI), along with Information Handling Services (IHS) Inc., has been monitoring and reporting counterfeit component statistics. Recent data, illustrated in Figure 3, shows that reporting of counterfeit parts has quadrupled since 2009. Within a recent 24-month period, U.S. Customs and Border Protection seized over 1.6 million counterfeit semiconductor chips. Among the top five counterfeit components, as listed in Table 2, analog ICs are the largest source of counterfeit incidents, with no technology solution available to adequately counter the threat [11]. These areas, collectively, represent an integrated circuit application space that generated $169B in revenue in 2011, with $47.7B specifically in the analog market.

Rank   Commodity Type          % of Reported Incidents
1      Analog IC               25.2%
2      Microprocessor IC       13.4%
3      Memory IC               13.1%
4      Programmable Logic IC    8.3%
5      Transistor               7.6%

Table 2. Top-5 Most Counterfeited Semiconductors in 2011 [12]


A February 2012 investigation by the Government Accountability Office (GAO), commissioned by the U.S. Senate Armed Services Committee [13], reported that multiple military systems have been affected by counterfeit parts. In fact, 16 different types of military-grade parts ordered in a blind test could not be confirmed as legitimate after extensive testing. These items included suspect memory chips for the L3 displays in the C-27J and C-130J aircraft cockpit, counterfeit electromagnetic interference filters on Raytheon's FLIR system and targeting laser for the SH-60B Hellfire missile, counterfeit Xilinx parts on Boeing's ice detection modules for the Navy's P-8A Poseidon anti-sub aircraft, and counterfeit memory chips on Lockheed Martin's Terminal High Altitude Area Defense system (THAAD) [14]. It is clear that conventional best practices, like buying from established sources, have proven to be inadequate to ensure trust in the supply chain.

Estimates indicate that legitimate electronics companies lose about $100B of global revenue annually due to counterfeits [15].

1.1 Threat space taxonomy

The classification of “counterfeit” encompasses many concerns in the design, acquisition, and deployment of electronics. Counterfeiting makes the supply chain vulnerable to malicious or improper functionality, unauthorized access, and potential unreliability.

Figure 4 shows the lifecycle of an integrated circuit with the potential vulnerability insertion points.


Figure 4. Supply Chain Vulnerabilities [3]

There are potential issues associated with each transition through the supply chain. A Hardware Trojan may be inserted into a design, where an attacker in the design house or foundry adds malicious circuits, or modifies existing ones, to force the device to behave differently under certain conditions or act as a backdoor for information leakage to an adversary [5]. IP can be stolen for future use in different foundry processes. An untrusted foundry may take steps to produce different types of counterfeit circuits or implement malicious processes to degrade performance or decrease reliability. Changes to the characteristics may be inserted through thinning of lines, changing doping concentrations in active areas, and other manufacturing process variations. Once produced, parts may be tampered with during assembly or be improperly handled through distribution, use, and disposal, resulting in reinsertion of counterfeit parts back into the supply chain.

Commodity platforms such as the Internet of Things (IoT), smartphones, and commercial global positioning satellite systems are equally challenged by the same threats and vulnerabilities as high-reliability and high-security systems. In an IoT system, reliance is placed on each networked component providing trustworthy authentication and identification, which has proven to be extremely vulnerable to cloning and spoofing when implemented through software or firmware solutions [16]. These applications face a dynamic and increasing base of potential cloning attacks through the ubiquity of software-defined radios, where software-driven waveforms can be mimicked [17][18]. Solutions to these problems require exploitation of foundational hardware science and technology to apply effective supply chain risk management (SCRM), authentication, unique identification, and reliability solutions.

1.2 Hardware Assurance Principles

In order to present a solution that fully addresses the dynamic threat and analysis space documented in the previous section, it is critical to define key characteristics, technology challenges, and available solutions that could be leveraged or expanded. Statistical boundary conditions and goals are established in order to define and provide practical techniques and methodologies for design and analysis. Through a deeper look at the threat and taxonomy, it was determined that a solution must be such that changes made to a design at an uncontrolled foundry location could be identified. These changes could include foundry process modifications or functional circuit modification. The assumption is made that any potential adversary (malicious or monetary) would have access to the complete mask layout GDSII file, and therefore any proposed technique needs to provide value regardless of reverse engineering capabilities. With these first principles in mind, the proposed solution is foundationally based on in-situ architected, design- and process-specific performance and variation.


The attributes and challenges are summarized through three characteristics:

1. Physically unique pedigree: As technology nodes advance with Moore's Law, now approaching 10 nm, processing capabilities struggle to deal with variability and yield relationships for the smallest node implementations [19]. Transistor scaling reduces the number of atoms in the device channel, magnifying the effect of even small variations [20]. Although concerning for efforts to improve yield, this provides an opportunity for chip identification. Based on state-of-the-art methods for reducing process variation, it is clear that identification can be achieved on a statistical basis. Advanced-node silicon will continue to provide a unique opportunity to take advantage of inherent variability to produce statistically significant circuit traits.

2. Authentication through Challenge-Response Behavior: Inspection or interrogation techniques to analyze the trustworthiness of an integrated circuit cannot be cumbersome to the point where they are not cost or time effective. Methods need to be relatively quick, non-destructive, and sensitive to the degree by which the uniqueness of the process variability was exercised. To prevent targeted modification, where an external monitor circuit is not altered but the information-carrying IC is, the authentication process should consist of challenge(s) applied to the actual circuitry that is planned to be used in the system. The response from the circuit is then used to complete the chip-level authentication pair.

3. Signature abstraction for identification and reliability monitoring: Current and future mixed-signal, RF sensor, and communication systems require the increased performance of next-generation highly integrated electronics. Transistor scaling and new materials continue to drive increases in performance and reductions in size, weight, and power. However, scaling lowers breakdown voltages, which impacts the device power handling capability, and new materials introduce significant uncertainty in operating limits and overall component lifetime. As a result, potential system failures may occur due to unsafe operating conditions and unknown material degradation. There is a need for high-quality and timely assessment of electronics reliability to trade performance specifications against useful life. Reliability studies are costly, primarily due to existing modular systems where a single reliability test channel is needed with multiple expensive power supplies and meters. Analyzing information from performance traits and unique fingerprints can provide quick and accurate correlation to models of reliability and lifetime prediction in operating electronics. These need to be based on physical traits and be relatable to advanced simulation and modeling for predictive capability.

Levels of integration continue to increase, and so does the complexity of integrated circuits. In order to address the largest class of counterfeits, research solutions were focused on the analog mixed-signal (AMS) domain, and specifically on the ability to authenticate and uniquely identify mixed-signal integrated circuits using process variation. The techniques put forth in the remaining sections are, to a large degree, orthogonal to traditional technology development, introducing methodologies that exploit characteristics in novel ways. When trustworthiness is considered, there may be instances where performance and cost may be traded for increased security. An exhaustive review of the techniques available to tackle the challenges presented in the previous section was executed. This review included a literature search, an investigation of current industry and government funding to develop solutions, and discussions with identified leaders in the fields of circuit authentication and reliability, resulting in the research strategy discussed in the following chapters.

1.3 Authentication and Reliability in Integrated Circuits

To deal with the proliferation of counterfeit and potentially malicious ICs, physical inspection techniques are often deployed at the expense of long testing time [16].

Conversely, avoidance techniques, such as Hardware Intrinsic Security (HIS), have emerged to address the counterfeit problem from the design stage forward [5]. HIS methods such as Hardware Metering, Physically Unclonable Functions (PUFs), and Physical Layer Authentication exploit unique IC features, such as process variation, to identify individual chips [21]. Methods have also been proposed to embed a unique ID into the chip during or after fabrication and test, such as hardware metering and secure split test. Split fabrication feasibility is also being investigated, where the layout of the design is split into front-end-of-line (FEOL) and back-end-of-line (BEOL) layers, which are fabricated separately in different foundries to provide a level of obfuscation. Table 3 is a comparative summary of the current counterfeit avoidance techniques. Specific physical hardware authentication, unique ID, and reliability techniques are expanded on in the following sub-sections to compare, contrast, and evaluate state-of-the-art technology solutions. These include physical unclonable functions (PUFs), electromagnetic signature technology, RF-DNA fingerprinting, and predictive lifetime.


Avoidance Technique           Tamper       Reliability  Uniqueness    Area      Target Counterfeit       Target Components   Cost
                              Resistance                              Overhead  Types
Physically Unclonable         Medium       High         High          Low       Remarked, over-          Digital ICs         Med
  Functions (PUF)                                                               produced, cloned
Hardware Metering             Medium       High         Medium        Low/Med   Overproduced, cloned     Digital ICs         High
Secure Split Test (SST)       N/A          N/A          Medium        Medium    Overproduced, out-of-    Digital ICs         High
                                                                                spec/defective, cloned
Combating Die/IC Recovery     Medium       N/A          High          Low       Recycled, remarked       Digital ICs         Low
  (CDIR)
Poly fuse-based technology    Medium       N/A          High          Medium    Recycled, remarked       Digital ICs         Medium
  for recording use time
Electronic chip ID (ECID)     High         High         Not Verified  Low       Remarked                 Digital ICs         Low
DNA Markings                  Medium       Medium       Medium        N/A       Recycled, remarked       All (Dig/Analog/    High
                                                                                                         RF, etc.)
Nanorods                      Not Verified Medium       Not Verified  N/A       Recycled, remarked       All                 Not Verified
Magnetic PUF                  Not Verified High         High          N/A       Recycled, cloned         All                 Not Verified

Table 3. Counterfeit avoidance methods [3]

1.3.1 Physical One-Way or Unclonable Functions (PUFs)

The concept of Physically Unclonable Functions (PUFs) was originally introduced under the premise of Physical One-Way Functions, when a unique scattering response was obtained by shining a laser on a bubble-filled transparent epoxy wafer [22]. This concept has since been used to develop and exploit physical randomness functions for silicon devices, making use of the manufacturing process variations in modern ICs for identification and on-chip key generation [21]. These variations are uncontrollable and unpredictable, making them unique and potentially unclonable. A unique digital signature is generated for each IC in a challenge-response form that can be stored to allow later identification of genuine ICs. Various PUF architectures have been proposed, including the arbiter PUF, the ring oscillator PUF, and memory-based PUFs, such as the SRAM PUF [23][24]. The arbiter PUF computes a one-way function from propagation delays, as in Figure 5, that varies randomly with process variations [25]. Statistically measuring this delay related to threshold and lithography effects has recently been proposed as a technique for chip foundry identification [26].

Figure 5. Delay PUF: two paths with the same layout produce a unique response [25]

Only recently has an analog flavor emerged with regard to PUFs, where a modified threshold comparison was proposed that used an arbiter PUF to examine the difference in a series of mirrored current structures by transferring the delay to set the comparison value of the digital response bit [27].
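To make the arbiter mechanism concrete, the following is a minimal toy simulation, not the circuit of [25] or [27]: each simulated chip is a set of random per-stage delay mismatches, and the response bit is the sign of the accumulated delay difference between the two racing paths. The two-configuration stage model, delay spread, and all names are illustrative assumptions.

import numpy as np

def make_arbiter_puf(n_stages, sigma=5e-12, seed=0):
    # Per-stage delay differences (seconds) between the two racing paths,
    # one value for each challenge-bit configuration of the stage.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, sigma, size=(n_stages, 2))

def arbiter_response(stage_deltas, challenge):
    # Race the two paths: the arbiter outputs 1 if the accumulated
    # delay difference favors the top path.
    total = stage_deltas[np.arange(len(challenge)), challenge].sum()
    return int(total > 0.0)

# Two chips with identical layout but different random process variation
chip_a = make_arbiter_puf(64, seed=1)
chip_b = make_arbiter_puf(64, seed=2)
challenges = np.random.default_rng(3).integers(0, 2, size=(1000, 64))
resp_a = np.array([arbiter_response(chip_a, c) for c in challenges])
resp_b = np.array([arbiter_response(chip_b, c) for c in challenges])
print("inter-chip response difference:", (resp_a != resp_b).mean())  # ideally ~0.5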

1.3.2 Electromagnetic Signature Technology

Electromagnetic signature or spectrum analysis and identification has been used effectively for many years for battlefield management [28]. Unintentional Modulation on Pulse (UMOP) features, which are unavoidable and unique to individual emitters, are used to match received signals with specific emitters. Radars in the same family, constructed by the same manufacturer, have been shown to exhibit similar fine-frequency intra-pulse structure. This technique was expanded on and showed feasibility for unique identification of various WLAN 802.11b cards [29]. Emissions are captured with visual differences between manufacturers, such as depth of nulls, spectral content, and amplitude between symbols, as shown in Figure 6.


Figure 6. Time, spectral measure of 2 WLAN cards from three manufacturers [29]

Overall, time and frequency domain signatures demonstrated noticeable and quantifiable features using signal processing metrics such as cross-correlation. Furthermore, techniques have been proposed to estimate the power series coefficients of nonlinear devices in the Radio Frequency (RF) front end by observing the changes in spectral regrowth based on input power, which can then be used to identify detectable radio emitters [30].

1.3.3 RF-DNA Fingerprinting

Radio-frequency distinct native attribute (RF-DNA) fingerprinting has been proposed as a physical-layer technique to enhance the security of various wireless communications devices such as RFID and 802.11 Wi-Fi [31]. RF-DNA, possessed by the unintentional emissions of ICs, has shown promising empirical results demonstrating the suitability of RF-DNA fingerprinting for both identification and verification of device operation tasks [32]. Semiconductor-based IC device emissions are passively recognized based on discriminating features extracted from their intrinsic physical properties, similar to biometric human identification. A statistical analysis including the mean, variance, skewness, and kurtosis of 500 separate fingerprints across 40 devices is shown in Figure 7. The analysis technique works by comparing this coloration of RF emissions with a golden standard model and machine learning capability.

Figure 7. Statistical metrics of 500 Fingerprints for 40 individual devices [32]

1.3.4 Reliability Prediction

Traditional methods of reliability simulation are based on DC circuit analysis, which can result in overly pessimistic lifetime prediction. For long-term lifetime prediction, it is essential to have an efficient simulation methodology to estimate the degradation of various circuit parameters. Conventional reliability tools sample the circuit operation for a period and then extrapolate the degradation to the end of the lifetime [33][34]. Although they are generic for different types of operation patterns, these tools usually result in expensive computation and memory usage. Classic system-level Mean Time Between Failures (MTBF) reliability calculations, such as those detailed in MIL-HDBK-217, are meaningless if one or more components in the system are counterfeit [35]. Recent advances in digital design prediction capability utilize available DC and AC degradation models to couple in circuit performance parameters. This shows promise in the ability to map and predict circuit performance degradation at any given point of lifetime, without resorting to extrapolation, which can be off by orders of magnitude if the initial conditions have any error [36]. Analog Mixed-Signal (AMS) circuits need to have complementary performance metrics that are measurable and related to the component lifetime performance.

1.4 State of technology solutions

Counterfeit avoidance techniques are still in the infancy stages of research and pose unique challenges that must be addressed before deployment into high-reliability systems. Previous authentication work largely uses a digital approach [37], capturing variation as a binary threshold value. Uniqueness metrics (i.e. Hamming distance) sum a large number of values to distinctly identify a structure, requiring significant logic area [38] and evaluation time to achieve the desired security, independent of the intended chip function. While attractive for cryptography applications, these digital techniques are limited for authentication and reliability due to the inherent breakability of the digital challenge-response behavior. Additionally, the potential to selectively alter the behavior of individual transistors demonstrates an in-situ capability to flip the binary value of any gate and affect the integrity of the result [39]. Thus, IC trustworthiness cannot be determined by the operation of a neighboring security block. Instead, it must be established through examination of the functional IC.

Advanced techniques to identify or capture functionality in the chip through electromagnetic emissions and RF-DNA require library training or a golden standard. Additionally, there exists a significant level of abstraction in the quantification of the responses that removes any underlying detail about the authenticity of any of the constituent components.

The current state of the technology gives a scientific framework to attack the problem of counterfeits in AMS circuits. It is clear that the techniques discussed in this section do not provide a complete solution space; however, there are specific characteristics that are encouraging toward the assessment of ICs for authentication, unique ID, and reliability.

Individual chips possess variability to a degree by which they can be uniquely analyzed.

Electromagnetic signature qualities show the feasibility of separating spectrum measurements for classification and remote capture of unique traits. RF-DNA measurements show passive dynamic measurement resolution down to the component level. These findings can be leveraged moving forward with novel solutions. Techniques that allow for unique chip fingerprinting and authentication with predictive signature capability, based upon process and functional testing information, can potentially be exploited as a supply chain risk management technology.

To overcome current technological shortfalls and mitigate analog IC piracy, this thesis investigates a design approach that utilizes the correlation of circuit behaviors at the physical device and architectural level to 1) authenticate and group ICs of the same pedigree (process, lot, etc.), 2) provide traits for unique identification, and 3) enable reliability monitoring. A trade study of mixed-signal circuit topologies was conducted to identify circuits that may have particular sensitivity to process variation across nodes.

Further, this research targeted design conception through fabrication, addressing counterfeit avoidance, and does not seek to address package marking.


Chapter 2. Authentication and Identification in AMS ICs

Analyzing information from performance traits and unique fingerprints can provide quick, accurate means of statistical correlation, especially in CMOS ICs, where process control and models are well understood and provided as a performance agreement by the foundry through a Process Design Kit (PDK) [40]. These unique behaviors can be used to identify and group circuits of the same pedigree and provide traits for reliability monitoring.

Furthermore, these “fingerprints” can be exploited for purposes of individual chip authentication.

2.1 Process Specific Function (PSF) Authentication

There exists a unique fingerprint behavior for each building block in an integrated circuit system that is sensitive to random process variations to the degree by which it can be characterized. These behaviors, termed Process Specific Functions (PSFs) and shown conceptually in Figure 8, leverage the analog domain in the design of the blocks, the variation inherent to fabrication, and the challenge-response measurement of the PSF [41].

Using the full analog value of the intrinsic response reduces the number of gates required to provide authenticity by orders of magnitude over current approaches. AMS circuits exhibit unique spectral behavior based on design features and the inherent random differences in the manufacturing process [42]. These parameters can be characterized and used as a basis for comparison of the integrated circuit.


Figure 8. Unique Process Specific Function block diagram general architecture

A PSF is a measurable response that is deterministic in behavior based on a provided stimulus (challenge). In Figure 8, each die at a different location on the wafer contains an identical design at the circuit level, A-B-C, but is different at the physical level based on random process variation, 1-2-3. This random process is characterized by capturing the mean(s), $\mu$, variance(s), $\sigma^2$, and moment(s) (i.e. skewness, $\gamma$, and kurtosis, $k$) coupled with a statistically relevant distribution [43][40]. The language used here allows the integrated circuit to be viewed as a challenge-response block, which is familiar language in the hardware security literature.

For authentication, a challenge is provided to the die, which is acted on by the circuit to produce an architected response. The response is parameterized such that, under only the influence of process variation, it falls within the statistical window. The window is defined by means, $\mu_N$, variances, $\sigma_N^2$, skewness, $\gamma_N$, and kurtosis, $k_N$, where $N$ is the parameter ordinal, that captures the extent of expected process variation. If foundry process parameters change or there are functional changes to the IC, the response, for a given challenge, will fall outside of a statistical parameter window, i.e. circuit "3", failing authentication. To increase the probability of detection (PoD) of a tampered part, $N$ is increased based on the sensitivity and specificity of the parameters [44].
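A minimal sketch of this statistical window test is given below, assuming a hypothetical golden distribution; the harmonic choices, parameter names, and numerical values are illustrative placeholders, not measured data, and only the first two moments are checked.

import numpy as np
from scipy import stats

# Hypothetical golden model for three harmonic parameters (dBc), e.g. from
# Monte Carlo simulation of the designed PSF; values are illustrative.
golden_mu = np.array([-62.1, -70.4, -75.8])
golden_sigma = np.array([1.2, 1.5, 1.8])

def authenticate(response, alpha=0.05):
    # Pass only if every parameter lies inside the two-tailed
    # (1 - alpha) window of the expected process-variation spread.
    z = (response - golden_mu) / golden_sigma
    z_crit = stats.norm.ppf(1.0 - alpha / 2.0)  # ~1.96 for alpha = 0.05
    return bool(np.all(np.abs(z) <= z_crit))

print(authenticate(np.array([-61.5, -70.9, -76.2])))  # inside window -> True
print(authenticate(np.array([-55.0, -70.9, -76.2])))  # shifted harmonic -> False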

2.2 PSF Unique Identification

An advantage of defining the parameter space in this way is that it can be exploited for special cases. One such case is a completely random uniform distribution that provides "key-like" identification of the same integrated circuit design. Figure 9 shows a conceptual diagram of a uniform distribution where unique responses are returned for each parameter in the measurement space across a varying challenge. The variables $Y_N$ and $Z_N$ define the lower and upper boundaries of the response measurement space along the x-axis. The probability density of the response, $P$, is plotted on the y-axis, with $P = 0$ for a response less than $Y$ or greater than $Z$, and $P = 1/(Z-Y)$ in between. This means that the IC being challenged has an equal chance of producing a response anywhere in the measurement space. When the challenge and response are designed to produce this type of behavior, each IC's individual identity can be determined.

Figure 9. Unique PSF Random Uniform distribution measurement space [Y, Z]
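As a toy illustration of this key-like behavior, the sketch below quantizes each uniformly distributed response in [Y, Z] into one of a fixed number of symbols and concatenates them into an identifier; the bounds, bin count, and parameter count are assumed for illustration only.

import numpy as np

Y, Z = -80.0, -40.0   # assumed bounds of the designed response space
n_bins = 16           # number of ID symbols per measured parameter

def response_to_id(responses):
    # Map each response in [Y, Z] to a symbol 0..n_bins-1; since the
    # response is uniform with P = 1/(Z-Y), every symbol is equally likely.
    frac = np.clip((responses - Y) / (Z - Y), 0.0, 1.0 - 1e-12)
    return "-".join(str(int(s)) for s in frac * n_bins)

die_responses = np.random.default_rng(0).uniform(Y, Z, size=8)
print(response_to_id(die_responses))  # e.g. "10-4-1-0-13-14-12-7"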


2.3 CMOS Technology Considerations

“In a 16 nm MOSFET, there are 53 silicon atoms and 3.5 Boron ions in the channel of the transistor. At this level, purely random variations in the number of atoms in the area—more or less independent of the accuracy of the lithography and etch processes—would result in variations in saturation drain current. No known process, short of atom-by-atom construction of the channel with an atomic-force probe, could prevent variations at this level.” ISSCC 2007, Hartman, STMicroelectronics

The concept of process specific functions directly lends itself to design techniques within CMOS technology. PSF utility depends on mature, scalable, physics-based models. The design space to craft a unique behavior relies on the principle that the mean behavior is under the control of the designer, with the variance provided by random processes. This approach provides a reasonable framework to map both a design and a process. Modern CMOS technology process variation can be classified into three categories [45]: known systematic, known random, and unknown. For PSFs, systematic variation is reduced to the maximum extent possible, as it does not contribute to the overall uniqueness. Random device parameter fluctuations stem from non-uniformities in process parameters such as doping concentration densities, oxide thickness, and diffusion depths. Photolithography limitations also lead to random variations in the geometry of devices. The impact of random sources of variability increases with reduced device dimensions. For example, the effect of random dopant fluctuation (RDF) scales with the square root of the active device area [46]. Consequently, the threshold variation $\sigma_{V_{th}}$ is inversely proportional to it, $\sigma_{V_{th}} \propto 1/\sqrt{W \times L}$, where $W$ is the transistor width and $L$ is the transistor length. This comparison of threshold variation and channel dopants across process nodes is shown in Figure 10. Threshold variation has a direct impact on the current and transconductance values in MOSFET devices, affecting every aspect of performance in operation.


Figure 10. Impact of RDF on threshold and channel dopants in MOSFETS [47]
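This inverse-square-root area dependence is commonly written with a Pelgrom-style matching coefficient. The short sketch below assumes an illustrative coefficient (the 3 mV·µm value is a placeholder, not a parameter of the processes used later in this work):

import math

# sigma_Vth ~ A_VT / sqrt(W*L); A_VT is an assumed matching coefficient
A_VT = 3.0e-3 * 1e-6   # 3 mV*um, expressed in V*m

def sigma_vth(width_m, length_m):
    return A_VT / math.sqrt(width_m * length_m)

# Quadrupling the gate area halves the threshold-voltage spread
print(sigma_vth(1e-6, 100e-9))   # ~9.5 mV for W = 1 um, L = 100 nm
print(sigma_vth(2e-6, 200e-9))   # ~4.7 mV, half the spread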

Differences in devices from variation are found between transistors next to each other, called local or intra-die variation, or from die-to-die, called global or inter-die variation.

Distinct circuits, such as ring oscillators, memory (SRAM), transistor arrays, adders/arbiters, PLLs, and replica paths, are used for process characterization in an attempt to counter and reduce variation effects. Many of the sources of variability are not modeled in the design kits and may be treated as random during the design process [48]. Process characterization monitor circuits may be used to augment the fidelity of a PSF if necessary, and can be used to validate device models against the fabricated devices if the authentication cycle shows trust issues. An example of this use case is further discussed in Chapter 6, section 6.

2.4 AMS System Decomposition

AMS components are usually tailored to specific semiconductor process characteristics, requiring a unique design skill set and multiple design iterations to yield a successful product. Thus, once an AMS block is functional, it acquires high market value and may be used as 3rd-party IP across many systems, making it a popular target for attacks, since a successful exploit will affect many products [49]. In addition, it increases the value of IP piracy [3], raising the need for assured analog authentication. The complete space of AMS circuits is a wide subject, and complete coverage of all potential applications was not the intention of this research. There is, however, a consideration that PSF application should be targeted to critical components. As such, treatment of the components in the information paths of communications systems will provide considerable value in increasing trust. An obvious application space would be a transmitter chain, as shown conceptually in Figure 11.

As defined, PSFs are a subset of the total physical phenomenology of the circuit, used for individual circuit identification, and they can potentially be cascaded for authentication of an entire component chain. This larger application of the methodology depends on the fidelity of the PSFs of the constituent components. Therefore, investigation must take place at the device and component levels to develop PSFs. In order to scope the work appropriately, it became necessary to focus effort on one component in the chain.


Figure 11. Transmitter Chain (a) block diagram (b) component level waveform tracking [50]

Investigation into development of PSFs for any of the components in the chain would be a significant contribution to the field. There is, however, an emphasis, through increased integration and as shown in Figure 11 and Figure 12, on pushing the transmit and receive chains to direct-synthesis and direct-conversion topologies, respectively [51]. As such, on the transmit side, the digital-to-analog converter will continue to increase in expected functionality and overall performance relative to the other components, and thus became the focus of PSF application.


Figure 12. Increased transmit dependency on DAC


Chapter 3. Digital to Analog Converter PSF

PSFs are most useful when they exploit basic functions of the AMS blocks. Therefore, development of a PSF for a DAC begins with an investigation of its fundamental operation.

3.1 Basic DAC Operation

In the case of an N-bit DAC, the input digital data can be described as having N binary input bits defined by a vector as

$\hat{B} = \{b_{N-1}, b_{N-2}, b_{N-3}, \ldots, b_1, b_0\}$  (1)

where $b_i \in \{0,1\}$, $b_0$ is the least significant bit (LSB), and $b_{N-1}$ is the most significant bit (MSB). The vector is then converted to a decimal value $D$ by

$D = \sum_{i=0}^{N-1} 2^i b_i$  (2)

This value is then multiplied by a gain factor, such as $V_{LSB}$ (where $V_{LSB} = V_{FS}/2^N$) for voltage or charge-based DACs and $I_{LSB}$ (where $I_{LSB} = I_{FS}/2^N$) for current-steering DACs, to yield the final analog voltage (or current):

$V_{OUT}(D) = D \cdot V_{LSB}; \quad I_{OUT}(D) = D \cdot I_{LSB}$  (3)
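A minimal numerical sketch of Eqs. (1)-(3) for an ideal voltage-output DAC (the full-scale value is an assumption for illustration):

def dac_output(bits, v_fs=2.0):
    # Ideal N-bit DAC: bits[0] is the MSB b_{N-1}.
    n = len(bits)
    d = sum(int(b) << (n - 1 - i) for i, b in enumerate(bits))  # Eq. (2)
    v_lsb = v_fs / 2**n                                         # V_LSB = V_FS / 2^N
    return d * v_lsb                                            # Eq. (3)

print(dac_output([1, 0, 1]))  # 3-bit code 101 -> D = 5 -> 1.25 (with V_FS = 2)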

In an ideal sampling process, the DAC's inputs are fed with data impulse trains. However, finite switching time in real circuits requires input of square-wave pulses where data is held for a $T_{CLK}$ period, known as the zero-order hold (ZOH) [51][52].


Figure 13. Basic DAC Operation and output spectral content

The converter circuit assigns the analog weights corresponding to the digital input code, summing them to form a final discrete (stair-step) output. This basic operation and the resultant frequency content of the DAC are shown in Figure 13 [50]. The instantaneous jumps in analog levels of the resultant stair-step waveform indicate that the signal comprises a wide bandwidth. The expanded frequency content of an ideal DAC operation is shown in Figure 14. The finite resolution of the DAC results in inherent quantization noise that ultimately sets the minimum noise floor.

Figure 14. Foundational ideal DAC Spectrum
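For reference, the noise floor set by finite resolution is commonly summarized by the textbook signal-to-quantization-noise ratio of an ideal N-bit converter driven by a full-scale sinusoid (a standard result consistent with the uniform quantization-error model used later in Chapter 6):

% Ideal N-bit quantizer, full-scale sine input, uniform quantization error
\mathrm{SQNR} \approx 6.02\,N + 1.76~\text{dB}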


A DAC's output spectrum is divided into Nyquist zones defined at $nf_{CLK}/2$, where $n = 1, 2, 3, \ldots$ The signal, image replicas, and distortion products are attenuated with a hold-distortion sinc response. While there are several zero-order-hold variations, the most often used is non-return-to-zero (NRZ), where the DAC output is held for the entire duration of $T_{CLK}$ [53]. Operation of the DAC is typically limited to the first Nyquist zone due to the rapid roll-off of the sinc shaping.
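The NRZ hold droop can be evaluated directly from the sinc envelope; a small sketch (the clock frequency and test tones are arbitrary illustrative choices):

import numpy as np

def zoh_attenuation_db(f_out, f_clk):
    # NRZ zero-order-hold droop: |sinc(f/fclk)| in dB
    # (np.sinc is the normalized sinc, sin(pi x)/(pi x))
    return 20.0 * np.log10(np.abs(np.sinc(f_out / f_clk)))

f_clk = 4e9
for f in (37e6, 1e9, 1.9e9):   # tones within the first Nyquist zone
    print(f"{f/1e6:7.1f} MHz: {zoh_attenuation_db(f, f_clk):6.2f} dB")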

When presented with a digitized sinusoid input, a DAC generates an amplitude response at predictable frequencies based on the circuit architecture. Non-linearity is produced through amplitude and timing errors in the physical generation. Binary-coded structures generate tones at the fundamental and odd-order harmonics, due to the symmetric nature of the DAC waveform, while a unary structure generates an asymmetric waveform and consequently tones at the fundamental, even, and odd harmonics. The amplitude of the harmonics relative to the carrier, as well as noise in the system, are dependent on the design choice of transistors, geometries, interconnects, applied voltages/currents, and selection of passives [54]. Figure 15 shows the resultant spectrum including non-linear spurs [55].

Figure 15. Physical DAC spectrum

The sinc hold distortion requires synthesis of a low to moderate frequency relative to $f_{CLK}$ to observe non-attenuated, variability-based process specific characteristics in the first Nyquist zone. This view of the DAC spectrum provides a number of opportunities to develop a PSF.

3.2 DAC Output Waveforms:

A theoretical relationship has been shown to exist between the digital code input challenge to the DAC and the resulting spectrum response. To generate a PSF model for the DAC structure, the process-variation contribution to the harmonic content in the output waveform is considered. The harmonic content is calculated by summing the contribution of each bit in the DAC. Given an input signal, $x(t)$, the total quantized output of the DAC, $x_Q(t)$, can be written as

$x_Q(t) = \sum_{n=0}^{N-1} x_{Q_n}(x(t))$  (4)

where $x_{Q_n}(t)$ is the contribution of bit $n$ for an N-bit binary DAC ($n = 0 \ldots N-1$). The full-scale range (FSR) of the DAC is defined by a peak boundary $(-A, A)$. Furthermore, for analysis, the DAC is assumed to have an FSR of 2, and therefore the input $x(t)$ is bounded by a peak value of $\pm 1$. As described, the output of the DAC produces a quantized stair-step response, and as shown in [56][57], each bit produces a pulse-train output waveform defined by its architecture [58][59]. In order to ensure the resultant stair-step response for a binary implementation, each bit's square-wave transfer function, $x_{Q_n}(x(t))$, is shown in Figure 16, normalized to FSR.


Figure 16. Bit Transfer function for 3-bit Binary DAC

Since the binary structures produce symmetric waveforms, i.e. square waves across the y-axis, each of the bit waveforms can be represented as a periodic pulse-train function, as shown in Figure 17, where the period relationship $T/T_0 = 0.5$, the amplitude of the function $X_0 = 1/2^{N-n}$, and the full periodicity of the function, $T_0 = FSR \cdot 2^{n+1-N}$, are directly extracted from the bit transfer function [60].

Figure 17. Bit transfer function square-wave pulse train representation

Therefore, in taking a different approach than [61], where the spectral content is referenced in the frequency domain from mismatch error, the continuous time relationship of the input to DAC harmonic content is produced by the full treatment of the output waveform through

Fourier analysis, where the signal can be decomposed into its individual frequency components through [62],

( ) ∑∞ 푗푘ω0푥(푡) 푥푄푛 푥(푡) = 푘=−∞ 푐푘푒 (5)

Where the coefficients are defined as,

$c_k = \frac{1}{T_0}\int_{0}^{T_0} x_{Q_n}(t)\, e^{-jk\omega_0 t}\, dt$ (6)

Therefore, by analyzing the shifted pulse to ensure symmetry, the coefficient becomes,

$c_k = \frac{2X_0}{T_0}\int_{-T/2}^{T/2} e^{-jk\omega_0 t}\, dt$ (7)

Integrating provides,

$c_k = \frac{2X_0}{-jk\omega_0 T_0}\left(e^{-jk\omega_0 \frac{T}{2}} - e^{jk\omega_0 \frac{T}{2}}\right) = \frac{X_0}{-jk\pi}\left(e^{-\frac{jk\pi T}{T_0}} - e^{\frac{jk\pi T}{T_0}}\right)$ (8)

By Euler, the coefficient can be rewritten as,

$c_k = \frac{X_0}{-jk\pi}\, 2j\sin\left(-\frac{k\pi T}{T_0}\right) = \frac{2X_0}{k\pi}\sin\left(\frac{k\pi T}{T_0}\right) = \frac{2X_0}{k\pi}\sin\left(\frac{kT\omega_0}{2}\right)$ (9)

Using the property $\mathrm{sinc}(x) = \sin(x)/x$ results in,

$c_k = \frac{2X_0 kT\omega_0}{2k\pi}\,\frac{\sin\left(\frac{kT\omega_0}{2}\right)}{\frac{kT\omega_0}{2}} = \frac{2X_0 T}{T_0}\,\frac{\sin\left(\frac{kT\omega_0}{2}\right)}{\frac{kT\omega_0}{2}}$ (10)

Therefore, the 푐푘 coefficient simplifies to

$c_k = \frac{2TX_0}{T_0}\,\mathrm{sinc}\left(\frac{Tk\omega_0}{2}\right)$ (11)

Imposing conditions to remove the DC offset and restore the shift,

$c_k = \frac{2TX_0}{T_0}\,\mathrm{sinc}\left(\frac{Tk\omega_0}{2}\right) e^{-jk\omega_0\left(\frac{FSR}{2} - \frac{T}{2}\right)}$ (12)

Substituting $\omega_0 = 2\pi\frac{1}{T_0}$,

$c_k = 2X_0\frac{T}{T_0}\,\mathrm{sinc}\left(\frac{2\pi}{2}\frac{T}{T_0}k\right) e^{-j\left(\frac{2\pi FSR}{2T_0}k - \frac{2\pi}{2}\frac{T}{T_0}k\right)}$ (13)

Substituting values for period relationship,

$c_k = X_0\,\mathrm{sinc}\left(\frac{\pi k}{2}\right) e^{-jk\left(\pi 2^{N-n-1} - \frac{\pi}{2}\right)}$ (14)

Since $\mathrm{sinc}\,x = \frac{\sin x}{x}$,

$c_k = \frac{2X_0}{\pi k}\sin\left(\frac{\pi k}{2}\right) e^{-jk\pi\left(2^{N-n-1} - \frac{1}{2}\right)}$ (15)

Where,

$c_k = \begin{cases} 0 & \text{for even } k \\ \frac{2X_0}{\pi k}\sin\left(\frac{\pi k}{2}\right) e^{-jk\pi\left(2^{N-n-1} - \frac{1}{2}\right)} & \text{for odd } k \end{cases}$ (16)

Therefore, the expression can be written in terms of odd values,

$c_k = \frac{2X_0}{\pi(2k-1)}\sin\left(\frac{\pi(2k-1)}{2}\right) e^{-j(2k-1)\pi\left(2^{N-n-1} - \frac{1}{2}\right)} \quad \text{for } k \geq 1$ (17)

Rewritten as,

$c_k = \frac{2X_0}{\pi(2k-1)}\sin\left(\pi k - \frac{\pi}{2}\right) e^{-j(2k-1)\pi\left(2^{N-n-1} - \frac{1}{2}\right)}$ (18)

Since $\sin\left(\pi k - \frac{\pi}{2}\right) = -\cos(\pi k) = -(-1)^k$ for integer k,

$c_k = -(-1)^k\,\frac{2X_0}{\pi(2k-1)}\, e^{-j\pi\left(2^{N-n-1}(2k-1) - k + \frac{1}{2}\right)}$ (19)

Substituting by Euler, $e^{-j\frac{\pi}{2}} = \frac{1}{j} = -j$,

$c_k = (-1)^k\, j\,\frac{2X_0}{\pi(2k-1)}\, e^{-j\pi\left(2^{N-n-1}(2k-1) - k\right)}$ (20)

Since $e^{j\pi} = -1$, replacing $e^{jk\pi} = (-1)^k$ gives

$c_k = (-1)^k(-1)^k\, j\,\frac{2X_0}{\pi(2k-1)}\, e^{-j\pi 2^{N-n-1}(2k-1)} = j\,\frac{2X_0}{\pi(2k-1)}\, e^{-j\pi 2^{N-n-1}(2k-1)}$ (21)

Substituting $X_0 = \frac{1}{2^{N-n}}$,

$c_k = j\,\frac{2}{2^{N-n}}\frac{1}{\pi(2k-1)}\, e^{-j\pi 2^{N-n-1}(2k-1)} = j\,\frac{2}{2^{N-n}}\frac{1}{\pi(2k-1)}\, e^{-j\left(\pi k 2^{N-n} - \pi 2^{N-n-1}\right)}$ (22)

Therefore,

$c_k = j\,\frac{2}{2^{N-n}}\frac{1}{\pi(2k-1)}\, e^{j\pi 2^{N-n-1}} \quad \text{for } k \geq 1$ (23)

Therefore $x_{Q_n}(t)$ can be written by inserting $c_k$ as,

$x_{Q_n}(t) = \sum_{k=-\infty}^{\infty} j\,\frac{2}{2^{N-n}}\frac{1}{\pi(2k-1)}\, e^{j\pi 2^{N-n-1}}\, e^{j(2k-1)\omega_0 x(t)}$ (24)

Rewriting in sine form,

$x_{Q_n}(t) = \frac{-4}{\pi 2^{N-n}}\, e^{j\pi 2^{N-n-1}} \sum_{k=1}^{\infty} \frac{1}{2k-1}\sin\left((2k-1)\,\omega_0\, x(t)\right)$ (25)

Substituting $\omega_0$,

$x_{Q_n}(t) = \frac{-4}{\pi 2^{N-n}}\, e^{j\pi 2^{N-n-1}} \sum_{k=1}^{\infty} \frac{1}{2k-1}\sin\left((2k-1)\frac{2\pi}{T_0}\, x(t)\right)$ (26)

Inserting the value for $T_0$,

$x_{Q_n}(t) = \frac{-4}{\pi 2^{N-n}}\, e^{j\pi 2^{N-n-1}} \sum_{k=1}^{\infty} \frac{1}{2k-1}\sin\left((2k-1)\frac{2^{N-n}}{A_B}\pi\, x(t)\right)$ (27)

For predictive, deterministic analysis of the spectrum, a pure sinusoid analog input $x(t) = \sin(\omega_o t)$ is selected as the challenge waveform. The resulting extraction process of the challenge to quantized bit waveform response is shown in Figure 18 [50][60].

Figure 18. Quantized bit waveform response extraction with $x(t) = \sin(\omega_o t)$

Therefore, the continuous time quantized bit waveform is expressed as,

$x_{Q_n}(t) = \frac{-4}{\pi 2^{N-n}}\, e^{j\pi 2^{N-n-1}} \sum_{k=1}^{\infty} \frac{1}{2k-1}\sin\left((2k-1)\frac{2^{N-n}}{A_B}\pi \sin(\omega t)\right)$ (28)

Using the Jacobi-Anger expansion, where $\sin(z\sin(\omega t)) = 2\sum_{q=1}^{\infty} J_{2q-1}(z)\sin\left((2q-1)\,\omega t\right)$, the expression can be written as,

$x_{Q_n}(t) = \frac{-8}{\pi 2^{N-n}}\sin\left(\pi 2^{N-n-1} + \frac{\pi}{2}\right)\sum_{k=1}^{\infty}\frac{1}{2k-1} \cdot \sum_{q=1}^{\infty} J_{2q-1}\left[(2k-1)\frac{2^{N-n}}{2A_B}\pi\right]\sin\left[(2q-1)\,\omega t\right]$ (29)

Therefore, in terms of the harmonics, the waveform is written as,

$x_{Q_n}(t) = \sum_{q=1}^{\infty} A_{odd}(q,n)\,\sin\left[(2q-1)\,\omega t\right]$ (30)

Where,

$A_{odd}(q,n) = \frac{-8}{\pi 2^{N-n}}\sin\left(\pi 2^{N-n-1} + \frac{\pi}{2}\right) \cdot \sum_{k=1}^{\infty} \frac{1}{2k-1}\, J_{2q-1}\left[(2k-1)\frac{2^{N-n}}{FSR}\pi\right]$ (31)

is the bitwise spectral contribution to the harmonic and $J_n(z)$ is a Bessel function of the first kind. Therefore, in a binary implementation, distinct switching bit waveforms can be observed for an N-bit DAC. The weighted, normalized, time-dependent bit waveforms, $x_{Q_n}$, of six bits, noted $d_1$–$d_6$, with peak amplitude A=1, each composed of 1000 harmonics, are shown in Figure 19 [50].

Figure 19. DAC bit waveforms (weighted MSB waveforms $d_1$–$d_6$ vs. time normalized to 1)

3.3 Output waveform amplitude decomposition

The total power contribution for the first eleven harmonics is shown relative to the power in x(t), in Figure 20. Inset A gives a visual representation of how the harmonics combine, through the previously derived Fourier theory, into unique bit waveforms that sum to form the resultant stair-step response. Examination of the harmonic response reveals that an implementation isolating the top N=3 MSBs of the DAC (circled area) for analysis purposes, regardless of the total resolution, provides the richest feature set of amplitudes of any binary resolution. This provides the most distinct range of harmonic amplitude values as well as the greatest delta or difference value between subsequent harmonics.

Figure 20. Harmonics vs. resolution for binary DAC

Figure 21. Harmonic Amplitude contribution by MSB

The contributions of each of the top 3 MSBs are further extracted from (31) to show a deterministic contribution to each harmonic power by MSB in Figure 21, broken down by each MSB-to-harmonic mapping in Figure 22.

Figure 22. MSB contribution to each harmonic


This relationship provides a unique mapping of the DAC architecture to the parameter space for PSF construction. For example, MSB2 produces a large negative contribution to the 5th harmonic while MSB3 exhibits a small positive contribution. These functions become process specific in the presence of manufacturing variation.

3.4 Finite sampling resolution

Treatment of the waveforms and subsequent harmonics thus far has assumed that each transition of the bit waveforms is captured and contributes to the final output. In practice there is a finite clock rate that can be executed on a given architecture, based on the unity-gain frequency of the device technology. The sampling or clocking of the input must first be considered. The output of the DAC is represented as a series of rectangular pulses with a width equal to the reciprocal of the clock rate. Figure 23 shows that there are potential issues with simply using Nyquist theory to clock the signal data. The grey curves represent finite resolution in the DAC output waveform, whereas the red curve will have a coarser response.

Figure 23. Finite resolution in sampling waveforms


Therefore, to produce the best finite resolution of the DAC, the pulse width of each of the individual waveforms must be considered in order to extract the necessary clock rate [50]. The relationship for clock frequency versus DAC resolution is shown in Figure 24. From the plot it is noted that there is an exponential behavior associated with increasing resolution and finite sampling. This sets the first boundary condition for DAC operation, where the minimum clock frequency for the DAC data can be established based on DAC resolution and reconstruction frequency. The experimental data has been theoretically modeled for additional LSBs, and extended to cover up to 12 bits of resolution in Figure 25. Alternatively, the highest reconstruction frequency can be backed out from the available data clock.

Figure 24. Sampling criteria for N-bit DAC based on waveform pulse width, 6 MSB


Figure 25. Log sampling criteria for N-bit DAC based on waveform pulse width

This fundamental-to-sampling ratio provides the number of harmonics available for measurement without aliasing in the 0 to fs/2 regime. The condition of finite precision sampling must hold if the parameter prediction equations are to hold true, and therefore the Nyquist sampling criterion is not sufficient for the intended application space. However, these waveforms, based solely on the varying clock, will provide different harmonic contributions to each amplitude that provides a unique deterministic PSF response space.

Chapter 4. DAC Implementation Model

Among the different DAC topologies, a current-steering architecture satisfies many high- speed and high-resolution applications [63][64]. For PSF development the intention is not to push state-of-the-art in DAC performance; however, it is important that techniques developed address the most advanced designs for potential insertion.

4.1 DAC Architecture

A straightforward implementation of a current steering DAC is an array of binary-weighted current sources. This means that each relative source from the least significant bit (LSB) to the most significant bit (MSB) is a power of 2 larger than the previous source in the amount of current it supplies, with the smallest value being the current in the least significant bit of the design, designated as ILSB [52]. A device-level schematic of an N-bit current steering DAC is depicted in Figure 26.

Figure 26. Binary Current Steering DAC

The binary-weighted current source array supplies the tail bias to the differential switch pairs that commute currents across the output load resistance $R_L$ to form the final DAC output voltage waveform. Based on the input digital code, $b_i \in \{0,1\}$, the corresponding current-commutating switch-pair steers the direction of the current into one of the differential arms of the DAC output. Two load resistors, typically 50 ohms each, are used to convert the differential DAC output current to a voltage signal. Assuming an LSB current of $I_{LSB}$, and noting $b_i$ as the $i^{th}$ binary bit of the digital code, the output current of an N-bit binary weighted DAC can be written as,

$I_{OUT} = I_{LSB}\sum_{i=0}^{N-1} 2^i\, b_i$ (32)

Depending on $b_i$, current is steered to and from the output combining load resistor ($R_L$), providing the DC output voltage. This provides the amplitude driven waveform by,

$V_{OUT} = R_L \cdot I_{LSB}\sum_{i=0}^{N-1} 2^i\, b_i$ (33)
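As a minimal sketch of Eqs. (32)-(33), the ideal output voltage of a binary-weighted current-steering DAC can be tabulated directly; the I_LSB and R_L values below follow the 90nm design example in Chapter 6 and are otherwise illustrative.

I_LSB = 1e-3   # LSB current (A), per the 90nm example
R_L = 50.0     # load resistance (ohms)
N = 3          # resolution (bits)

def dac_output_voltage(code):
    """V_OUT = R_L * I_LSB * sum(2^i * b_i) for a digital input code, Eq. (33)."""
    bits = [(code >> i) & 1 for i in range(N)]
    return R_L * I_LSB * sum(2**i * b for i, b in enumerate(bits))

for code in range(2**N):
    print(f"{code:0{N}b} -> {dac_output_voltage(code) * 1e3:5.0f} mV")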

Through use of the basic current steering cell architecture for generation of the characteristics for the PSF architecture, the footprint of implementation is reduced by 2-3X from current SoA techniques based on SRAM memory architectures [38]. This is accomplished through the three-device DAC bit cell versus a 6- or 9-transistor SRAM cell.

4.2 Variation Effects

To model the variation, $A_{odd}(q,n)$ is calculated and re-introduced as a distributed random variable¹. As shown in the previous analysis, variation of the amplitude of each bit waveform is equivalent to a variation in the current source value. A fixed static variation is placed on each of the bit waveforms to show an amplitude response change in the time domain waveforms.

For illustration purposes, large Gaussian normalized variation, σ/µ = 20%–40%, is introduced to each bit, shown in Figure 27. Ripple seen in the waveforms is due to the limited number of summation terms, which is the total number of harmonic contributions².

The resulting harmonic contribution from each MSB for a 3-bit DAC is shown in Figure 28, relative to the fundamental.

This is the key feature of the PSF implementation for this architecture. It yields a non- linear relationship to the statistical variance of each of the harmonics based on the challenge input and the variation in each bit. The overall contribution of the variation on the individual bit is analyzed with respect to the probability distribution function of the resulting amplitude of each harmonic.

¹ The work in this thesis has restricted analysis to Gaussian random variables with mean (µ) and variance (σ²). ² Simulation was done to include the first 1000 harmonics.

Figure 27. Bit waveforms with variation

Figure 28. 3-Bit DAC harmonic variation effects total contribution across MSBs


Figure 29. Mathematical operation of the sum of normal distributions

Each variation is modeled as a normal distribution, and the mathematical operation is shown visually in Figure 29.
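A minimal Monte Carlo sketch of this operation follows, assuming (as footnoted) Gaussian variation, here σ/µ = 20% applied as an independent amplitude scale on each bit, with Eq. (31) truncated at 1000 terms and FSR = 2; the helper a_odd repeats the sketch following Eq. (31).

import numpy as np
from scipy.special import jv

def a_odd(q, n, N, fsr=2.0, k_terms=1000):
    # Bitwise contribution of bit n to the (2q-1)th harmonic, per Eq. (31)
    k = np.arange(1, k_terms + 1)
    pre = (-8.0 / (np.pi * 2**(N - n))) * np.sin(np.pi * 2**(N - n - 1) + np.pi / 2)
    return pre * np.sum(jv(2*q - 1, (2*k - 1) * (2**(N - n) / fsr) * np.pi) / (2*k - 1))

rng = np.random.default_rng(0)
N_BITS, TRIALS = 3, 10_000

# Nominal bitwise contributions to the odd harmonics 1 through 11 (q = 1..6)
nominal = np.array([[a_odd(q, n, N_BITS) for n in range(N_BITS)] for q in range(1, 7)])

# Gaussian sigma/mu = 20% amplitude variation on each bit, summed across bits per Eq. (30)
scale = rng.normal(1.0, 0.20, size=(TRIALS, N_BITS))
samples = scale @ nominal.T

for q, col in enumerate(samples.T, start=1):
    print(f"harmonic {2*q - 1}: mean = {col.mean():+.4f}, std = {col.std():.4f}")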

Figure 30. Probability distribution of normalized amplitude with MSB 1 variation


Inducing the variation strictly on the MSB, shown in Figure 30, exhibits a different amplitude variance for each harmonic. It is clearly shown that the largest change in the distribution of the harmonic amplitude is in the fundamental frequency. This is an expected result when comparing to the harmonic contribution of the MSB, as the MSB contributes more than three times as much to the fundamental as any of the other bits. Furthermore, each additional bit provides unique harmonic distributions. Deviation in bit contributions to the DAC output provides a multi-attribute diversity in response that supports the identification of the component as being counterfeit or having malicious modification.

4.3 DAC Design Considerations

In order to develop an authentication response, the individual bit contributions are allowed to change with parametric physical changes that occur when geometry and process variations are introduced. A smaller bias-generating device will yield more variation in design, e.g. threshold voltage, and hence have a greater effect on the DAC's performance characteristics. These variations include the length and width of the integrated circuit devices, their mobility, and/or the thickness of the gate dielectric [40]. Figure 31 shows the parametric changes that occur when geometry and process variations are introduced to a 3-bit current steering DAC design. Each tail current contribution, I, will change by a ΔI with a standard deviation, σ(ΔI/I), for each current source, thereby changing the amplitude of the respective bit. To find the difference in current between a matched pair of devices, or comparatively the change relative to a device, $\frac{\Delta I}{I}$, the variation in the threshold voltage as well as mobility is examined with respect to device geometry [65].

Figure 31. 3-Bit Binary Current Steering DAC Architecture with variation effects

As discussed in Chapter 2, widely accepted standard deviation models for threshold, $V_t$, and transconductance, $K$, are inversely proportional to device area, $W \times L_p$. When the devices are in close proximity, the expressions (34) and (35) can be written [66].

$\delta(\Delta V_t) \propto \frac{M_{vta}}{(W \times L_p)^n}$ (34)

$\delta\left(\frac{\Delta K}{K}\right) \propto \frac{M_{ka}}{(W \times L_p)^n}$ (35)

Where,

$M_{vta}$ = matching coefficient, proportional to $t_{ox}$ and doping atoms in the depletion layer
$W$ = device width
$L_p$ = effective device channel length, $L_{drawn} - \Delta L_{nominal}$
$n$ = power of area dependency
$M_{ka}$ = matching coefficient

The standard deviation of the distribution of the extracted threshold voltage and transconductance is implemented in the compact device model, and the current relationship is expressed in (36).

$\frac{\Delta I}{I} = \sqrt{\left[\sigma\left(\frac{\Delta K}{K}\right)\right]^2 + 4 \times \frac{\left[100 \times \sigma_{\Delta V_t}\right]^2}{\left[V_{GS} - V_t\right]^2}}$ (36)

Process variation is the tool by which PSFs become possible; however, it is important to consider that the circuit is being designed for functionality as well. Therefore, high-fidelity, physics-based compact model equations and Monte Carlo simulations are used to determine overdrive voltage and geometry trade-offs for induced variation to meet characterization specifications. However, non-optimum device sizes and power performance may result from the development of PSFs.
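A minimal numeric sketch of the mismatch relationships in (34)-(36) is given below, assuming an area exponent of n = 1/2 (the common Pelgrom form) and illustrative matching coefficients; actual coefficients come from the foundry PDK.

import math

M_VTA = 3.5e-3   # threshold matching coefficient, V*um (illustrative assumption)
M_KA = 1.0e-2    # gain-factor matching coefficient, um (illustrative assumption)

def sigma_dI_over_I(W, Lp, overdrive):
    """Percent current mismatch of a current-source device, per Eq. (36)."""
    area = W * Lp                                   # um^2
    s_vt = M_VTA / math.sqrt(area)                  # sigma(dVt), V, Eq. (34) with n = 1/2
    s_k = 100.0 * M_KA / math.sqrt(area)            # sigma(dK/K), %, Eq. (35)
    return math.sqrt(s_k**2 + 4.0 * (100.0 * s_vt)**2 / overdrive**2)

# Smaller geometry and lower overdrive (V_GS - V_t) both increase variation:
for W, Lp in ((10.0, 1.0), (1.0, 0.2)):
    print(f"W = {W}, Lp = {Lp}: sigma(dI/I) = {sigma_dI_over_I(W, Lp, 0.2):.2f} %")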


Chapter 5. DAC Authentication and Unique ID Framework

Chapter 2 established mechanisms to determine the PoD for component authenticity by utilizing a generated parameter space with sensitivity and specificity to process variation. Chapters 3 and 4 applied these mechanisms to DACs, resulting in the parameter space across the harmonic spectrum defined by process specific variation shown in Figure 32.

Each of the absolute values of the harmonic amplitude is shown with a probability distribution that has a statistical mean and standard deviation for each peak amplitude of the harmonic, defined by the DAC architecture.

Figure 32. DAC spectrum parameter space defined across variation

The model generated from the design characteristics and the parameters provided in the process design kit becomes a hypothesis set. Any other component whose parameters are compared against the hypothesis becomes the test set. To account for both measurement uncertainty and deviation from expected process behavior in the PDK model, the PSF for the process is derived from a relative comparison of a set of parameters for each device.

5.1 PSF Authentication hypothesis testing

The amplitude (power) of the harmonics in the parameter space can be treated as individual characteristics based on the design of the DAC and the challenge. The modeled characteristic space can be generated from Monte Carlo simulations of the final DAC design in conjunction with the theoretical harmonic model in (30)-(31). This gives an expectation mapping for the parameters, which is one of the novel concepts of PSFs. This will allow for hypothesis testing of the resulting hardware measurement to determine a number of potential conclusions including, but not limited to, whether changes have been made to the design, whether process specifications have been altered to the point where they affect performance and potentially reliability, or a classification decision on the foundry at which a chip has been fabricated. Any particular classification definition is possible once a statistical framework has been developed for analysis.

The concept for PSF authentication is to determine whether or not the measured parameters are what are expected when the integrated circuit is returned from the foundry. Therefore, the measured parameter space can statistically be compared to the hypothesis model case.

In order to do so, the characteristics of the parameters must be put into a statistical context.

For each amplitude randomly sampled to create a vector, $\mathbf{X}_{H_i} = (X_1, \ldots, X_n)$, of the harmonics being measured, $H_i$, statistical features are determined based on the number of samples taken, $N_x$, with sample mean,

$\bar{x} = \frac{1}{N_x}\sum_{n=1}^{N_x} x_n$ (37)

and sample variance,

$\hat{\sigma}^2 = \frac{1}{N_x}\sum_{n=1}^{N_x}(x_n - \bar{x})^2$ (38)

Therefore, the statistical feature vector of the harmonic space, $F_{H_i}$, is,

$F_{H_i} = \left[\bar{x}_{H_i}, \hat{\sigma}^2_{H_i}\right], \quad i \in \{1,2,\ldots,N_H\}$ (39)

These feature vectors are then combined to form a random vector for the characteristic as,

$\mathbf{F} = \left[F_{H_1} \,\middle|\, F_{H_2} \,\middle|\, \ldots \,\middle|\, F_{H_{N_H}}\right]$ (40)
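A minimal sketch of the feature extraction in (37)-(40) follows, assuming measurements holds repeated amplitude samples per harmonic; the synthetic means and deviations are illustrative only.

import numpy as np

def feature_vector(measurements):
    """Return [sample mean, sample variance] per harmonic, stacked per Eq. (40)."""
    return np.array([(np.mean(x), np.var(x)) for x in measurements])

# Example: 3 harmonics, 50 samples each (illustrative values)
rng = np.random.default_rng(1)
measurements = [rng.normal(mu, sd, 50) for mu, sd in ((-9.8, 0.2), (-36.9, 7.6), (-39.0, 5.9))]
print(feature_vector(measurements))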

In the PSF concept, it was highlighted that each of the harmonics could potentially have a different statistical shape based on the variation and dependence of the current sources. Therefore, the goal here for authenticity would not be to create one large multivariate distribution for comparison, but rather to compare each feature vector against its expected behavior from the simulation model. A valid statistical method is to do a hypothesis test, and more specifically a generalized likelihood ratio test can be used [67]. A generalized likelihood ratio test can be used to test one hypothesis over another within a specific confidence interval, and the assumption is made that there is an unknown mean and variance for any suspect data set. A null hypothesis, $H_o$, specifies that V, some vector, lies in a particular set of possible values, $Q_0$, i.e. $H_o: V \in Q_0$; an alternate or suspect hypothesis, $H_S$, defines that $V$ is in another set of possible or suspect values, $Q_S$, which does not overlap $Q_0$, i.e. $H_S: V \in Q_S$. Since the concern is whether or not the distributions overlap each other in the parameter space, meaning that a test could possibly give a positive value for a set of data that is not of the family of authentic pedigree, it is noted that $Q = Q_0 \cup Q_S$, and either or both of the hypotheses $H_o$ and $H_S$ can be compositional, meaning that they can constitute ratios or relative probabilities [68].

For analysis, $L(\hat{Q}_0)$ is defined as the maximum of the likelihood function for all $V \in Q_0$; that is, $L(\hat{Q}_0) = \max_{V \in Q_0} L(V)$. $L(\hat{Q}_0)$ provides the best conclusion for the measured data for all $V \in Q_0$. Likewise, $L(\hat{Q}) = \max_{V \in Q} L(V)$ provides the best conclusion for the measured data for all $V \in Q = Q_0 \cup Q_S$. If $L(\hat{Q}_0) = L(\hat{Q})$, then a best conclusion for the measured data can be found inside $Q_0$, meaning that the null hypothesis should not be rejected. Conversely, if $L(\hat{Q}_0) < L(\hat{Q})$, then the best conclusion for the measured data could be found inside $Q_S$, and rejection of the null hypothesis should be considered in favor of the suspect hypothesis. A likelihood ratio statistic, $\lambda$, can be written as,

$\text{Likelihood ratio statistic},\ \lambda = \frac{L(\hat{Q}_0)}{L(\hat{Q})} = \frac{\max_{V \in Q_0} L(V)}{\max_{V \in Q} L(V)}$ (41)

Therefore, the test statistic can be used as a likelihood ratio test of the null hypothesis versus the suspect hypothesis. As with any statistical test probability, a threshold needs to be defined to baseline an acceptable range. Thus, $k$ is defined as the threshold parameter for a specific confidence interval of interest, and a rejection is determined by $\lambda \leq k$.

As a statistical test, it is clear that $0 \leq \lambda \leq 1$ is the initial boundary condition. If the value of $\lambda$ is close to zero, the likelihood of the sample is much smaller under the null hypothesis than under the alternate hypothesis, and the test suggests choosing $H_S$ over $H_o$. A value of $k$ is chosen in order to achieve an acceptable probability of rejecting the null hypothesis when the null hypothesis is true, defined as the test α. This type of test can evaluate any metric of the feature vector against the modeled statistics.

For the case of the parameters that have been identified for PSFs, the raw data can be used from one of the feature vectors, $\mathbf{X}_{H_i} = (X_{H_1}, \ldots, X_{H_n})$, where the amplitude of the harmonics is measured over n ICs. The null hypothesis $H_o: \mu = \mu_o$ is tested versus a suspect hypothesis, $H_S: \mu \neq \mu_o$, where $\mu_o$ is the expected mean of the fabricated IC based on the model. To create a more powerful test, the assumption is made that the mean, $\mu$, and variance, $\sigma^2$, of the measured data are not known. However, it is assumed that the sample set the data was taken from is that of a normal distribution. This test is set up to formally test whether or not to reject the established null hypothesis. This test aligns very well with the goals of authenticity in that it can be determined, within a confidence interval, whether the measured integrated circuit is of the expected pedigree. This same analysis could be done to test the variance of the data set if it were a more effective discrimination parameter.

In this example case, $V = (\mu, \sigma^2)$. Therefore, $Q_0$ is the set $\{(\mu_0, \sigma^2): \sigma^2 > 0\}$ and $Q_S = \{(\mu, \sigma^2): \mu \neq \mu_o, \sigma^2 > 0\}$, and then $Q = Q_0 \cup Q_S = \{(\mu, \sigma^2): -\infty < \mu < \infty, \sigma^2 > 0\}$, with $\sigma^2$ left as an unspecified, unknown quantity. The work can now be done to find $L(\hat{Q}_0)$ and $L(\hat{Q})$. Assuming a normal distribution, the relationship becomes

$L(V) = L(\mu, \sigma^2) = \left(\frac{1}{\sqrt{2\pi}\sigma}\right)^n \exp\left[-\sum_{i=1}^{n}\frac{(X_{H_i} - \mu)^2}{2\sigma^2}\right]$ (42)

In order to find $L(\hat{Q}_0)$, $\mu$ is restricted to $Q_0$, which means that $\mu = \mu_o$. $L(\hat{Q}_0)$ can now be determined by solving for the value of $\sigma^2$ that maximizes $L(\mu, \sigma^2)$. This is determined as $\hat{\sigma}_0^2$, and is

$\hat{\sigma}_0^2 = \frac{1}{n}\sum_{i=1}^{n}(X_{H_i} - \mu_o)^2$ (43)

$L(\hat{Q}_0)$ is found by replacing $\mu = \mu_o$ and $\sigma^2 = \hat{\sigma}_0^2$ in $L(\mu, \sigma^2)$. Thus,

$L(\hat{Q}_0) = \left(\frac{1}{\sqrt{2\pi}\hat{\sigma}_0}\right)^n \exp\left[-\sum_{i=1}^{n}\frac{(X_{H_i} - \mu_o)^2}{2\hat{\sigma}_0^2}\right] = \left(\frac{1}{\sqrt{2\pi}\hat{\sigma}_0}\right)^n e^{-\frac{n}{2}}$ (44)

To find $L(\hat{Q})$, let $(\hat{\mu}, \hat{\sigma}^2)$ be the point in Q which maximizes $L(\mu, \sigma^2)$, thus

$\hat{\mu} = \bar{X} \quad \text{and} \quad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_{H_i} - \hat{\mu})^2$ (45)

Giving the result,

$L(\hat{Q}) = \left(\frac{1}{\sqrt{2\pi}\hat{\sigma}}\right)^n \exp\left[-\sum_{i=1}^{n}\frac{(X_{H_i} - \hat{\mu})^2}{2\hat{\sigma}^2}\right] = \left(\frac{1}{\sqrt{2\pi}\hat{\sigma}}\right)^n e^{-\frac{n}{2}}$ (46)

Therefore, the likelihood ratio is calculated as,

$\lambda = \frac{L(\hat{Q}_0)}{L(\hat{Q})} = \frac{\left(\frac{1}{\sqrt{2\pi}\hat{\sigma}_0}\right)^n e^{-\frac{n}{2}}}{\left(\frac{1}{\sqrt{2\pi}\hat{\sigma}}\right)^n e^{-\frac{n}{2}}} = \left(\frac{\hat{\sigma}^2}{\hat{\sigma}_0^2}\right)^{\frac{n}{2}} = \left(\frac{\sum_{i=1}^{n}(X_{H_i} - \bar{X})^2}{\sum_{i=1}^{n}(X_{H_i} - \mu_o)^2}\right)^{\frac{n}{2}}$ (47)

It is noted that $0 < \lambda \leq 1$ because $Q_0 \subset Q$; thus when $\lambda \leq k$ the null hypothesis would be rejected, where $k < 1$ is a constant. Since,

$\sum_{i=1}^{n}(X_{H_i} - \mu_o)^2 = \sum_{i=1}^{n}\left[(X_{H_i} - \bar{X}) + (\bar{X} - \mu_o)\right]^2 = \sum_{i=1}^{n}(X_{H_i} - \bar{X})^2 + n(\bar{X} - \mu_o)^2$ (48)

The rejection region $\lambda \leq k$ is written and further defined in (49)-(51),

$\left(\frac{\sum_{i=1}^{n}(X_{H_i} - \bar{X})^2}{\sum_{i=1}^{n}(X_{H_i} - \mu_o)^2}\right)^{\frac{n}{2}} \leq k$ (49)

Thus,

$\frac{\sum_{i=1}^{n}(X_{H_i} - \bar{X})^2}{\sum_{i=1}^{n}(X_{H_i} - \bar{X})^2 + n(\bar{X} - \mu_o)^2} \leq k^{\frac{2}{n}}$ (50)

And,

$\frac{1}{1 + \dfrac{n(\bar{X} - \mu_o)^2}{\sum_{i=1}^{n}(X_{H_i} - \bar{X})^2}} \leq k^{\frac{2}{n}}$ (51)

is an equivalent inequality,

$\frac{n(\bar{X} - \mu_o)^2}{\frac{1}{n-1}\sum_{i=1}^{n}(X_{H_i} - \bar{X})^2} \geq (n-1)\left(k^{-\frac{2}{n}} - 1\right) = k'$ (52)

Therefore,

$\frac{n(\bar{X} - \mu_o)^2}{\frac{1}{n-1}\sum_{i=1}^{n}(X_{H_i} - \bar{X})^2} \geq k'$ (53)

By defining the unbiased population variance estimation as,

$S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_{H_i} - \bar{X})^2$ (54)

The rejection region becomes,

$\frac{(\bar{X} - \mu_o)^2}{S^2/n} \geq k'$ (55)

Therefore,

$\left|\frac{\bar{X} - \mu_o}{S/\sqrt{n}}\right| \geq k''$ (56)

It can be seen that $\frac{\bar{X} - \mu_o}{S/\sqrt{n}}$ is the t-statistic, with a t-test decision rule for k and sample size n, in this case degrees of freedom, as shown in Figure 33 [69][70].

Figure 33. Distribution for t-test, two-tail probability statistics

This analysis clearly defines the boundary conditions as a straightforward threshold test for first pass authentication. A t distribution with n-1 degrees of freedom will be used to choose k’’. Relating to Figure 33, for a two-tailed t test, this requires that,

$k'' = t_{\alpha/2,\, n-1}$ (57)

for the sample driven α t-test, where the probability of committing a Type I error is α. In terms of detecting a suspect component, this means that the confidence interval is 1−α for correctly identifying a component as authentic when it is in fact a good part. The statistics for confidence are dependent on the sample size and confidence bounds, shown in Table 4 for a two-tailed t-test.

Degrees of Freedom (n−1) | α = 20% (1−α = 80%) | α = 10% (90%) | α = 5% (95%) | α = 1% (99%)
5 | 1.48 | 2.02 | 2.57 | 4.03
15 | 1.34 | 1.75 | 2.13 | 2.95
25 | 1.32 | 1.71 | 2.06 | 2.79
35 | 1.31 | 1.69 | 2.03 | 2.72
50 | 1.30 | 1.68 | 2.01 | 2.68
100 | 1.29 | 1.66 | 1.98 | 2.63

Table 4. Critical t-values for sample size and confidence for two-sided t-test [69]

As a demonstration example, a measured sample size of n=16 suspect components needs to be evaluated for authenticity. Likewise, the vendor has established a requirement, in order to reduce the cost of discarded stock, that the probability of incorrectly identifying a good component as a counterfeit is less than or equal to 5%. Alternatively, this means that there is a 95% confidence in correctly identifying good parts based on the architected test.

To accomplish this evaluation, a critical value of $k''$ is selected from the table as $k'' = t_{0.05/2,\,15} = 2.13$, and the resulting t-test analysis is shown in Figure 34.
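A minimal sketch of this n = 16, α = 5% test is shown below, using scipy for the critical value; the measured sample values are synthetic and µo is illustrative.

import numpy as np
from scipy import stats

mu_o = -36.95                        # expected harmonic mean from the model (illustrative)
rng = np.random.default_rng(2)
x = rng.normal(-37.5, 2.0, 16)       # measured harmonic powers of 16 suspect parts (synthetic)

t_stat = (x.mean() - mu_o) / (x.std(ddof=1) / np.sqrt(len(x)))   # Eq. (56)
k2 = stats.t.ppf(1 - 0.05 / 2, df=len(x) - 1)                    # k'' = t_{alpha/2, n-1} = 2.13

verdict = "reject H_o (suspect)" if abs(t_stat) >= k2 else "fail to reject H_o (authentic)"
print(f"|t| = {abs(t_stat):.2f}, k'' = {k2:.2f} -> {verdict}")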

A comparable method to authenticate the DAC output, based on distributions of the harmonic values, is to use a statistical test for each parameter (harmonic in this example) and assign a binary pass or fail value as to whether or not the data is within the expected range. A probability distribution of the resultant value would provide a clear view of how many devices passed the criteria. Additionally, if there were certain harmonic features that did not match with the hypothesis, they would be highlighted through the binary value.

Figure 34. Analysis of n=16 suspect samples with ≤ 5% Type I error


The measurement space can be represented as a set of expected frequency (F) model distributions as,

$F_{H_i} = \left[\mu_i, \sigma_i^2\right], \quad i \in \{1,2,\ldots,N_H\}$ (58)

This is represented visually as an increasing number of parameters across the measurement space in Figure 35.


Figure 35. Feature vector distribution space

Thus, the measurement vector overlaid becomes,

$\mathbf{X}_H = (X_1, \ldots, X_N) \quad \text{for harmonic space } H \in \{1,2,\ldots,N\}$ (59)

Using the normal distribution test, where the mean and variance of the distribution are known, a determination can be made whether or not the measured data falls within a confidence interval. This method is the preferred route when there is a significant sample of parts from the same family to test against the expected modeled behavior, i.e. 4 < n < 30, but will change to a z-score when only one individual device is available for measurement.

This scenario is shown in Figure 36, where each parameter distribution is overlaid with the notional individual harmonic measurement.



Figure 36. Measurement vector overlaid with expected distribution definition

In this scenario, authenticity is determined by the z score calculation,

$Z = \frac{X_N - \mu_{N_o}}{\sigma}$ (60)

with its statistical significance value overlaid in Figure 37. Each z-score provides a different location on the distribution confidence boundary. Therefore, a maximum deviation z-score can be set to determine whether a component is authentic or suspect. For example, a ±3σ range, i.e. a z-score of ±3.0, provides 99.7% coverage of the statistical window.

Figure 37. Z-Score two-tail distribution confidence


This means there is a very small chance that a part truly representing the expected behavior will fall outside the range, and the test should fail a part that does. The tracking test becomes,

$\mathbf{B},\ \text{Pass or Fail Test} = \begin{cases} \left|\dfrac{X_N - \mu_{N,O}}{\sigma_{N,O}}\right| \leq 3.0 & \text{Pass},\ b = 1 \\ \left|\dfrac{X_N - \mu_{N,O}}{\sigma_{N,O}}\right| > 3.0 & \text{Fail},\ b = 0 \end{cases}$ (61)

For example, by measuring the pass/fail response from individual measurements from a feature vector with 6 parameters:

A pass of all the parameters would yield the value 111111 = 63
A failure of the 1st parameter would provide the value 011111 = 31
A failure of the 3rd parameter would provide the value 110111 = 55
A failure of the 1st and 3rd parameters would provide the value 010111 = 23

Because the condition is captured in a binary pass/fail, a unique number for each failure criterion can be generated to determine the parameter cause and therefore potentially the root cause. The binary response can be plotted with respect to a threshold to determine whether or not a set of ICs is authentic, as shown in Figure 38.

Figure 38. Acceptance and failure values based on harmonic test and threshold
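A minimal sketch of the Eq. (61) encoding follows, assuming the first parameter maps to the most significant bit of the word, as in the example above; the model statistics follow Table 5 and are used here only as an illustration.

import numpy as np

mu = np.array([-0.105, -35.41, -34.33, -34.04, -41.29, -36.71])     # model means (Table 5)
sigma = np.array([0.19, 7.61, 5.92, 4.42, 12.92, 7.49])             # model deviations (Table 5)

def pass_fail_word(x):
    """b = 1 where |z| <= 3.0 per Eq. (61); first parameter is the MSB of the word."""
    bits = (np.abs((x - mu) / sigma) <= 3.0).astype(int)
    return bits, int("".join(map(str, bits)), 2)

x = np.array([-0.3, -29.18, -28.01, -36.21, -52.55, -33.42])        # one device's measurements
bits, word = pass_fail_word(x)
print(bits, word)   # all parameters within +/-3 sigma -> [1 1 1 1 1 1], 63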


5.2 Authenticity – Probability of Detection (PoD)

Through the individual parameter and individual device hypothesis testing, first-pass authenticity is established. From a design perspective it is important to understand to what degree a specific implementation of a PSF provides an adequate PoD such that, to the maximum extent possible, a tampered part does not pass the challenge-response scenario described in Chapter 2. In statistical terms, this is a Type II error, in which the test would accept a suspect part as being part of the authentic family, or the null hypothesis in the analysis presented in the previous section.

5.2.1 ROC Curve Analysis

Through the likelihood ratio testing, a confidence interval was determined in order to minimize identifying an authentic part as suspect. This would be sufficient if the impact of accepting a suspect part were low, but that cannot be determined with certainty. Therefore, it is critical to maximally increase the probability of detection of the null hypothesis test. This is achieved by restricting the boundary conditions of the harmonic parameters' statistical test to a decision space for a given design using Receiver Operating Characteristic (ROC) curves [71].

ROCs are used to generate a relationship between identifying authentic parts (Sensitivity) and rejecting tampered parts (Specificity). For PSFs in the harmonic space, a parameter $P_O$ is represented as the positive distribution with mean $\mu_O$ and variance $\sigma_O^2$, while $Y_N$ is the distribution made up of parts of unknown authenticity, with mean $\mu_N$ and variance $\sigma_N^2$.

The ROC curve is therefore summarized as,

$ROC(t) = \Phi\left[\frac{\mu_N - \mu_O}{\sigma_N} + \frac{\sigma_O}{\sigma_N}\Phi^{-1}(t)\right], \quad 0 \leq t \leq 1$ (62)

Where,

$\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{1}{2}z^2}\, dz$ (63)

is the standard normal distribution function [39][57]. Through this analysis, the probability of detecting individual suspect devices is accomplished through measurement over a set of statistical parameter tests. Figure 39 shows the overlay of two different sets of PSFs.


Figure 39. Normalized PSF Spectrum Parameter Space
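A minimal sketch of the binormal ROC in (62)-(63) is given below, using scipy's standard normal CDF for Φ and its inverse; the distribution parameters are illustrative only.

from scipy.stats import norm

def roc(t, mu_o, sigma_o, mu_n, sigma_n):
    """PoD at false-alarm rate t for normal authentic (O) and suspect (N) parameters, Eq. (62)."""
    return norm.cdf((mu_n - mu_o) / sigma_n + (sigma_o / sigma_n) * norm.ppf(t))

# Example: well-separated parameter distributions (illustrative values)
for fa in (0.10, 0.50, 0.90):
    print(f"false alarm {fa:.0%}: PoD = {roc(fa, mu_o=0.0, sigma_o=1.0, mu_n=2.5, sigma_n=1.2):.2f}")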


The confidence interval, t, is spanned from 0 to ±3σ (i.e. normalized 0% to 99.7%) to empirically derive the PoD versus false alarm by calculating the area under the curve for each position: True Positive, True Negative, False Positive, and False Negative.

Using only an individual parameter to compare between an authentic and a suspect component can yield a low PoD due to a large false positive region, such as with the fundamental, (P||Y)F. Instead, relative parameter relationships in the response space are used to create comparison PSFs, increasing PoD while removing any measurement error contribution.

5.3 Model Comparison: Authentic vs. Suspect

To demonstrate detection improvement, authentic and suspect data are generated by assigning variances and inserting them into a statistical model generated from (62) and (63). A unique design space is initially chosen for an authentic component by assigning a variance for each normalized MSB: $\sigma^2_{MSB1} = 2.25$, $\sigma^2_{MSB2} = 1.5$, $\sigma^2_{MSB3=LSB} = 0.25$.

A suspect statistical model representative of process retargeting or cloning (e.g. 90nm to 65nm) is created where the device geometries are scaled into a smaller process with excess capacity, resulting in an inverse variance change due to process mismatch [72]. Therefore, each normalized suspect MSB variance is: $\sigma^2_{MSB1} = 0.25$, $\sigma^2_{MSB2} = 1.5$, $\sigma^2_{MSB3=LSB} = 2.25$.

The resultant parameter space for the fundamental through the 11th harmonic is shown in Figure 40.

Figure 40. Authentication Harmonic Parameter Comparison Space

Comparing the resultant fundamental parameter spaces, (P||Y)F, as would be done during a normal functional test, produces an effectively low probability of detecting the suspect tampered device, PoD << 50% from a 0 to 90% False Alarm Rate, as shown in Figure 41.

Figure 41. Authentication test for (P||Y)F

This results in a high confidence that an individual suspect component would pass a normal electrical screening test and indicates that using only the fundamental as a PSF is not sufficient for authentication. However, by comparing the statistical delta between the (P||Y)5th and (P||Y)7th parameters, a PSF with greater than a 90% PoD is created with only a 10% probability of a false alarm, shown in Figure 42.

Figure 42. Authentication test for (P||Y)5th Δ (P||Y)7th

In this scenario, a counterfeiter is forced to match an increasing number of non-linear characteristics in order to pass an authentication test. Increasing comparison parameters, or tests, yields movement up and left in the ROC curve space. A test with perfect discrimination (no overlap in the two distributions) has a ROC curve that passes through the upper left corner (100% sensitivity, 100% specificity). Since each of the parameters provides a degree of uncorrelated, non-linear behavior, the confidence boundary is limited only by the ability to fully measure an increasing parameter space. This means that increasing the number and relative behavior between harmonic parameters allows the reduction of Type II errors while maintaining a low probability of Type I errors.

5.4 Unique Identification

Following a positive authentication test, the individual parameter measurements are then used as a unique identifier. Figure 43 shows the representative space of authentication parameters, 1 to N, each with their own probability distribution function (PDF), overlaid with two separate uniquely identified components, #1 and #2. Each component is assigned a unique ID array based on the amplitude measurement of the harmonic for each parameter, $A_N$. A number of classification methods, such as principal component analysis, support vector machines, and artificial neural networks, can be used during this step to maximally separate the unique ID arrays from each other [69].

It is important to do the unique identification step only after authentication so that any measurement uncertainty introduced during an absolute amplitude measurement is only included in the ID classifier and not the authenticity test, as the measurement uncertainty could skew results by diminishing the contribution of process variation.

Figure 43. Representative Unique ID Parameter space


Chapter 6. Process Specific DAC Implementation

The previous sections systematically determined an authenticity and unique ID space.

Using process design kit data, a unique DAC was designed in a 90nm low power process to exercise these traits and identify an initial reference to statistically monitor for reliability threats.

6.1 Process Considerations

A model for macro-level performance has been developed, and simulation at the device level in advanced design tools has provided specific traits of interest, these being the frequency response and amplitude of the harmonic parameters. The behavior has been integrated with, and is dependent on, foundry process information for development of specific output traits. The current dependency described in Chapter 5 is generated by the transistor for each current source. More specifically, the FET operates across multiple regions defined by the geometry related potential of the device. Photolithographic manufacturing results in random Gaussian distributed variation in the transistor gain factor, $\beta = \mu_n C_{ox} W/L$³, and threshold voltage, $V_T$ [73]. Specifically, the current drive (bias) is fed with a distribution that correlates with the threshold voltage and beta factor for the implemented device array. The device models utilized are provided by the individual chip manufacturing companies to capture the electrical behavior in the PDK. Circuits are fabricated at the foundry of the user's choice, based on the PDK used, to get the intended/modeled statistical response of the circuit.

³ $\mu_n$ is the surface mobility of transistor carriers, $C_{ox}$ is the transistor oxide capacitance (dependent on the thickness of the gate insulator $t_{ox}$ and ε, the permittivity of the gate insulator), $W$ is the transistor channel width, and $L$ is the transistor channel length.

6.2 90nm DAC design and simulation

In order to provide comparison capability to the theoretical models, a 3-bit binary DAC was designed in a low power 90nm process. The design operated at $V_{DD} = 1.2V$ with $I_{LSB} = 1mA$ and $R_L = 50\Omega$, as shown in Figure 44 along with the threshold distribution of the LSB cell over 100 simulations⁴. The threshold distribution deliberately differs for each current source to contribute to a change in the expected current.


Figure 44. Basic Current Steering DAC (a) cell implemented and (b) threshold distribution in 90nm CMOS

⁴ The distributions shown are developed from 100 Monte Carlo simulations.

The maximum theoretical output voltage swing, $V_{pp}$, for the bit waveforms is 700mV⁵.

Two separate sinusoidal input signals, at 331 MHz and 1.145 GHz, were sampled with an ideal ADC to develop the inputs for the bit waveforms, which were then clocked at $f_{CLK} = 4\ GHz$. The DAC was designed for minimal mismatch between sources in order to provide a centered baseline for future sensitivity test scenarios. The distribution for each resultant current cell is shown in Figure 45. The variation in the threshold for each cell contributes to the current distribution for each of the cells, as well as the device geometry. The mean values of the statistical current distributions are used for simulation; therefore, the maximum swing is increased from theoretical as they are larger than the ideal values. This is shown in the figure by means of the distribution of the output voltage along with the final output waveforms in Figure 46 and Figure 47.


Figure 45. 90nm DAC current cell distributions of (a) LSB, (b) MSB 2, and (c) MSB

⁵ $V_p = R_L \times (I_{LSB} + 2I_{LSB} + 4I_{LSB}) = 50\Omega \times 7mA = 350mV$

Figure 46. DAC waveform output at 1.145 GHz

Figure 47. DAC waveform output at 331 MHz

It is noted that, although the signal was clocked at more than twice the fundamental frequency for each waveform, not all of the quantization levels are captured for each cycle of the DAC operating at 1.145 GHz. The DAC operating at 331 MHz is able to capture a much greater portion of the full resolution of the quantizer. The number of transitions of the bit waveforms captured determines the final output contribution. Therefore, it is expected that the statistical distribution of the resultant harmonics, being a sum of the individual waveform contributions, will be different for each of the input signals. This provides a variant on the challenge-response pair that is recommended for future work.

6.3 Authentication and unique ID against model

In order to capture a full set of harmonics in the fs/2 regime, a sine wave input signal with a frequency of 37 MHz was sampled and clocked at $f_{CLK} = 4\ GHz$. The full DAC spectrum, including the full scale input carrier signal power, is shown in Figure 48.

Figure 48. DAC output spectrum of 37MHz signal clocked at 4 GHz Full input/output


Figure 49. DAC Output expansion of 37MHz signal

This combination ensures a large space of non-distorted parameters for generating PSFs with a signal carrier power of -7.9dBm established for dBc references of harmonic powers.

The 90nm DAC spectrum is individually broken out in Figure 49. The harmonic spectrum contains distortion products at power levels relative to the first pass theoretical model, but unique to the implemented design based on variations in the MSBs' current scaling. Figure 50 shows the individual simulation harmonic amplitude mapping overlaid with the variation induced parameter space. As expected, the design passes authentication for all parameters, but with varying, unique amplitudes in the statistical mapping.

Figure 50. 90nm CMOS design harmonic parameter distributions with individual design mapped for authentication

Since the design process has yielded one harmonic value per parameter, a z-test can be used for authentication testing. The statistical parameters for each harmonic parameter along with the 90nm design Z-score are shown in Table 5.

Harmonic Parameter | Mean, $\mu_{N_o}$ | Standard deviation, $\sigma_{N_o}$ | Experimental 90nm Values, $X_N$ | Z-Score, $Z = \frac{X_N - \mu_{N_o}}{\sigma}$
Fundamental | -0.1050 | 0.19 | -0.3 | -1.03
3h | -35.41 | 7.61 | -29.18 | 0.82
5h | -34.33 | 5.92 | -28.01 | 1.07
7h | -34.04 | 4.42 | -36.21 | -0.49
9h | -41.29 | 12.92 | -52.55 | -0.87
11h | -36.71 | 7.49 | -33.42 | 0.44

Table 5. 90nm statistical parameter data and as-designed Z-scores

The individual simulated mapping produces a unique ID, as defined in Figure 43, for the introduced design, ID90nm = {0.3, 29.18, 28.01, 36.21, 52.55, 33.42}. When the design goes to fabrication, each individual manufactured integrated circuit will produce a unique ID that can be classified with the methods noted earlier in the chapter. These IDs will be used in conjunction with further analysis on degradation mechanisms in DACs to generate a temporal statistical window to add mitigation against recycled and reused components.

6.4 130nm Hardware Characterization

A high performance current steering DAC from internally developed IP was leveraged to demonstrate PSF application on an implementation with SoA performance. This also provides a baseline for the impact of added security features to subsequent high performance design iterations. The DAC was implemented in a 0.13µm SiGe BiCMOS process and occupies 6.25mm² [74]. An image of the die is shown in Figure 51.

Figure 51. Die Photograph of DAC [74]

The top 3 MSBs of the DAC were utilized for analysis purposes. The DAC was designed with minimal mismatch between current sources, and high resolution is obtained by precision calibration structures, implemented on a per cell basis. To demonstrate PSF viability on this architecture, the calibration structures were disabled, allowing the process variation and measurement uncertainty to influence the output response. This allowed the exploration of a dynamic variability space to correlate authentication and unique ID PSF frameworks. Individual measurements were repeated across a changing input stimulus to collect 250 unique device measurements.

6.4.1 Test setup

A diagram of the test setup is shown in Figure 52. The DAC data was sent to the inputs from LVDS ParBERT channels at 3.35Gb/s. The ParBERT allows for precision input to be sent to each channel with phase alignment, removing timing errors on the clock data that could mask small variation effects [75].

Figure 52. Precision DAC Measurement Setup

6.4.2 Measurement resolution

For the application of PSFs, the predictable values of the harmonics must be measurable, even in the presence of noise. While the DAC's operation itself does not introduce quantization noise, the finite resolution of the DAC results in inherent noise that ultimately sets the minimum noise floor [57]. In the previous analysis of the harmonic spectrum of the bits, the dependence of the harmonic content of the quantization error waveform on the input signal was found. Since this case dealt with a deterministic signal, the individual harmonic amplitude levels were able to be calculated. Through the process of constructing a PSF, a wide range of variation contributions can be considered. Therefore, it is important for the designer to understand the cause and effect relationship of increasing variation in a design, making otherwise basic concepts, i.e. signal-to-noise ratio (SNR), non-trivial. The signal-to-noise ratio can also be analyzed for additional resultant error waveform contributions, since matching error between sources can be stochastic and deterministic [76]. The signal-to-noise ratio is represented as the power in the signal relative to the power in the noise, in dB, over the frequency range of interest, from 0 to $f_{CLK}/2$.

$SNR = 10\log\left(\frac{P_s}{P_e}\right)$ (64)

Since the input challenge signal is a sinusoidal waveform, its maximum value is $2^N(\Delta/2)$ (peak value), where Δ is the DAC step size, with signal power:

$P_S = \left(\frac{\Delta 2^N}{2\sqrt{2}}\right)^2$ (65)

It is therefore necessary to understand the signal-to-noise relationship as compared to the signal to distortion products. For the sake of a cleaner, more meaningful analysis, it is assumed that the worst case in the ability to resolve any harmonic content of the noise is when the error signal is modeled as an independent, additive white random signal [77].

This analysis gives the minimum SNR performance for a specific resolution DAC. The average of the quantization error or noise is found from the probability density of a uniformly distributed signal, shown as the error signal of a mid-rise quantizer with noise $e_Q$ approximated as an independent random number uniformly distributed between ±Δ/2 in Figure 53.


Figure 53. Model of mid-rise quantizer (a) function and (b) quantization error signal [50][58]

The probability density function (pdf) $p(e_Q)$ is therefore shown in Figure 54.


Figure 54. Probability density function of uniformly distributed quantization error


The total power of the error, $P_e$, is equal to its variance; therefore, since it is noted that the average is 0, the equation can be written,

$P_e = \sigma_{e_Q}^2 = \int_{-\infty}^{\infty} e_Q^2\, p(e_Q)\, de_Q = \left(\frac{\Delta}{\sqrt{12}}\right)^2$ (66)

The SNR can now be written using (64) and reduced as,

$SNR = 10\log\left(\frac{\left(\frac{\Delta 2^N}{2\sqrt{2}}\right)^2}{\left(\frac{\Delta}{\sqrt{12}}\right)^2}\right) = 10\log\left(\frac{\frac{\Delta^2 2^{2N}}{8}}{\frac{\Delta^2}{12}}\right) = 10\log\left(\frac{3}{2}\, 2^{2N}\right) \approx 6.02N + 1.76$ (67)

This is the result of the signal to noise ratio in the case where a full scale sinusoid is presented to the quantizer. This phenomenon is plotted against the previously calculated harmonics in Figure 55.

Figure 55. Harmonic content of DAC with overlaid white noise model
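The 6.02N + 1.76 result can be checked numerically with a quick simulation, assuming a full-scale sine into a uniform mid-rise quantizer with the error treated as white, per the text; the quantizer model below is a generic sketch, not the measured hardware.

import numpy as np

def simulated_snr_db(N, samples=1_000_000, seed=0):
    """SNR of a full-scale sine through an ideal mid-rise quantizer (FSR = 2)."""
    rng = np.random.default_rng(seed)
    delta = 2.0 / 2**N                                  # step size
    x = np.sin(rng.uniform(0.0, 2.0 * np.pi, samples))  # full-scale sinusoid
    xq = delta * (np.floor(x / delta) + 0.5)            # mid-rise quantization
    return 10.0 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))

for N in (3, 8, 12):
    print(f"N = {N:2d}: simulated {simulated_snr_db(N):6.2f} dB vs. 6.02N + 1.76 = {6.02 * N + 1.76:6.2f} dB")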

It can be seen that capturing the total bandwidth (i.e. 0 to fs/2) all at once may not provide the information needed for the harmonic application space. Of course, this is not what is typically done in measurement, but it does give a starting point for further investigation on characterizing the output. Treatment of the bandwidth restrictions for PSF resolution is further discussed in Section 6.4.4.

6.4.3 DAC Resolution vs Accuracy

Although the resolution for an N-bit DAC is set based on the number of quantization levels, the total accuracy of the DAC may exceed the resolution. In the previous analysis it was assumed that the quantization error could span from ±∆/2. If this amount is reduced, the contribution of error noise power would also be reduced, which would increase the effective accuracy of the quantizer. Likewise, the application of PSFs to a DAC design cannot be done without consideration of the variation effect on the resulting resolution.

There are typically offset and gain errors in DACs; however, these types of errors are easy to account for or remove, do not introduce non-linearity, and therefore have no effect on the spectral performance and consequently the accuracy of the DAC [78]. However, other characteristics such as integral non-linearity (INL) and differential non-linearity (DNL), which are trivial to experienced DAC designers, become complex considerations and define overall PSF boundary conditions when variation effects are increased. On the surface it may seem reasonable for a designer implementing a PSF to continue to increase variation until a high probability authentication test is defined. However, this could quickly result in a design that no longer behaves like a quantizer and merely produces erroneous output. For illustrative purposes, a 3-bit DAC example is shown in Figure 56. INL is defined as the deviation of the actual DAC output from the linear transfer characteristic at every code input. The $INL_{max}$ is the worst value of the INL, where N is the number of bits of the DAC. As seen, the INL directly reflects the static linearity of the DAC. Likewise, the differential non-linearity (DNL) is the deviation of the actual step size from the ideal step size (1 LSB) between any two adjacent digital input codes. The $DNL_{max}$ is the worst case of the DNL [40][52].

Figure 56. Static performance specs of a 3-bit DAC [40]

In general, the INL must be kept below 0.5 LSB to maintain a DNL below 1 LSB across the range. Simply stated, INL is the algebraic sum of the previous DNLs of each level leading up to that code's output value. The INL is therefore evaluated to determine the absolute accuracy of the DAC, N, by taking the maximum deviation from the linear transfer curve, $INL_{max}$, compared to the FSR reference voltage, $V_{REF}$, as,

$N = \log_2\left(\frac{V_{REF}}{2 \times INL_{max}}\right)$ (68)

In turn, the signal-to-noise ratio of the resultant spectrum from 0 to fs/2 is found by,

$SNR = 6.02\left[\log_2\left(\frac{V_{REF}}{2 \times INL_{max}}\right)\right] + 1.76$ (69)

For example, if a reference voltage of 5V is desired for the output of the DAC, and a 3-bit DAC is used to create the quantized output, the $INL_{max}$ must be in the range $\pm 0.5 \times 5/2^3 = \pm 0.3125$ V to maintain the DAC behavior. If this is inserted back into expression (68),

$N = \log_2\left(\frac{V_{REF}}{2 \times INL_{max}}\right) = \log_2\left(\frac{5}{2 \times 0.3125}\right) = 3\ \text{bits}$ (70)

If the $INL_{max}$ is reduced by half, the result is,

$N = \log_2\left(\frac{V_{REF}}{2 \times INL_{max}}\right) = \log_2\left(\frac{5}{0.3125}\right) = 4\ \text{bits}$ (71)

And there is a 1 bit “accuracy” increase in the DAC. This also translates to an SNR increase,

$SNR_{increase} = 6.02\, N_{change} = 6.02(4 - 3) = 6.02\ dB$ (72)

Therefore, based on the level of the harmonic tones that need to be brought above the noise floor, a specific accuracy of DAC can be calculated, regardless of the DAC resolution. Although this sounds good in theory, it may not be practical to further reduce the INL of the total swing of the DAC and still maintain a variation space that will provide statistically measurable results. However, the maximum INL drives the maximum amount of mismatch variation that can be induced between DAC cells to still maintain an N-bit operation. This design trade-space is critical for a designer to navigate when enhancing variation effects, should be carefully managed through PSF development, and is process specific as described analytically in Chapter 4, Section 2. In the case where the resultant design of a PSF, due to process limitations, does not provide complete N-bit accuracy resolution of output harmonic content, the ability to resolve the signal through characterization techniques becomes increasingly important as a design consideration.

6.4.4 Oversampling Processing Gain

The frequency space of the DAC output can be divided up into sub-bands for characterization. In most applications the signal would be filtered to relax the bandwidth requirements away from the signal of interest. In the case of PSFs this is not a desired approach. In order to characterize each tone, a specific bandwidth can be chosen around the fundamental and harmonics to allow reduction of the noise floor to reveal tones of interest. In order to find the maximum bandwidth that would allow a signal of interest to "be seen", an equation for processing gain is developed [52].

$SNR = 6.02N + 1.76 + 10\log_{10}\frac{f_s}{2 \cdot BW}, \quad \text{over bandwidth (BW)}$ (73)

From a theoretical standpoint this creates an oversampling condition on the signal space and therefore a reduction in the noise power.
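A minimal sketch of Eq. (73) follows, reproducing the Section 6.4.6 numbers (fs = 1 GHz, N = 3) under the assumption of a quantization-limited noise floor.

import math

def snr_with_processing_gain(N, fs, bw):
    """Quantization-limited SNR (dB) when measuring in a bandwidth bw, per Eq. (73)."""
    return 6.02 * N + 1.76 + 10.0 * math.log10(fs / (2.0 * bw))

fs = 1e9
for bw in (fs / 2, 1e6, 50e3):
    print(f"BW = {bw / 1e3:8.0f} kHz: SNR = {snr_with_processing_gain(3, fs, bw):5.1f} dB")
# At BW = 50 kHz the processing gain term is 10*log10(1e9 / 1e5) = 40 dB, matching Eq. (77).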

6.4.5 Spectrum Analysis Considerations

Analog spectrum analysis can be done to measure the output of a DAC, similar to the FFT process used in analog-to-digital converter characterization. A spectrum analyzer used to measure noise or distortion performance of a DAC should have at least 10 dB more dynamic range than the test article [40][79]. The analyzer's resolution bandwidth (RBW) must be set to a level where the harmonic products resolve above the noise floor. The spectrum analyzer can be used to measure the SNR of the DAC with correction factors taken into account. The process is to first measure some point in the spectrum that is suspected to be free of harmonic content. This measurement is shown as the S/N_F in Figure 57. The actual SNR, over the DC to fs/2 bandwidth, can then be found by subtracting the process gain [47].

This full analysis is shown in Figure 57, along with the measurement of the odd-ordered DAC harmonics, up to the 11th harmonic, by sweeping the spectrum analyzer RBW across the spectrum.

Figure 57. DAC distortion and SNR spectrum analysis measurement [47]

6.4.6 Implementation example

The methods and considerations brought forward in the previous section were applied to a case example in order to show the decision flow process on how to obtain measurements for the number of harmonics needed to satisfy the identified PSF. A basic design trade space exercise is implemented to show a decision flow. To start, a nominal clock frequency of 1 GHz is chosen. A binary 3-bit DAC current steering architecture is selected, which allows the setting of the maximum frequency for finite resolution of 1/25th of the clock frequency. Therefore, in order to adhere to the prime sampling condition identified in Chapter 3 [58],

$\frac{f_{sig}}{f_{CLK}} = \frac{p}{q} \leq 0.04, \quad p \neq mq, \quad p, m, q \in \mathbb{Z}$ (74)

Also, in order to support the ease of analysis, an LSB current of 0.9mA is chosen for the DAC, terminated with a 50Ω resistor. This provides a peak-peak amplitude of 630mV for an output power of 0dBm. The input signal is therefore,

$x(t) = 0.316\ \text{Volts} \cdot \sin(2\pi \cdot 39\ \text{MHz} \cdot t)$ (75)

The data is clocked with an NRZ pulse, and assuming that a ±0.5 LSB INL is maintained, the spectrum from 0 to fs/2 is shown in Figure 58.

[Plot: DAC output spectrum from 0 to fCLK/2, with the fundamental at 39 MHz and odd harmonics (3rd through 11th) under the NRZ hold distortion envelope; the theoretical RMS quantization noise floor is marked.]

Figure 58. DAC Output spectrum, Bandwidth from 0 to fs/2


In order to get the noise floor below the lowest spur, a processing gain of at least 30 dB is required. In order to see the signal, an additional margin of at least 10 dB should be added.

Therefore,

$\text{Processing Gain} = 10\log_{10}\frac{f_s}{2 \cdot BW} = 40\ dB$ (76)

Therefore,

$\frac{f_s}{2 \cdot BW} = 10^{\frac{40}{10}} \;\rightarrow\; \frac{1\ \text{GHz}}{2 \cdot BW} = 10^{4} \;\rightarrow\; BW = \frac{1 \times 10^9\ \text{Hz}}{2 \times 10^4} = 50\ \text{kHz}$ (77)

And the final measurement spectrum is shown in Figure 59.

[Plot: the same DAC output spectrum with the spectrum analyzer noise floor at RBW = 50 kHz; the processing gain lowers the measured noise floor below the theoretical RMS quantization noise floor, revealing the 3rd through 11th harmonics.]

Figure 59. DAC output measurement spectrum with resolution bandwidth of 50 kHz


6.5 Hardware Measurement Collection and Processing

Utilizing the design, measurement, and sampling constraints imposed by PSF implementation, measurement data was collected on the 0.13um DAC with settings identified in Table 6. The combined MSBs allowed for a 3-bit DAC architecture measurement with a non-de-embedded full scale range signal power of -9.5 dBm.

Type | Description | Value/Notes
Frequency | f | 10 MHz – 1.665 GHz
Sampling | fs | 3.35 GHz
Bins | Input varied by prime | 163, 317, 653, 1637, 3271
Measurements | Power | 5516 measurements per device; 6,895,000 fields
Frequency (Bins) | Fundamental Harmonic Freq: Prime/16384 × fs (Hz) | 163: 33328247; 317: 64816284; 653: 133517456; 1637: 334713745; 3271: 668999093
Power | 1250 measurements per harmonic (odd fundamental through 11th harmonic) | *Ranges in next table
Devices | 250 unique measurements | Each device capture per bin assignment

Table 6. Input and Measurement Data

An image capture of the spectrum analyzer data for a one-tone sine input at 33.3 MHz, clocked at 3.35 GHz, is shown in Figure 60.

Figure 60. Spectrum analyzer data measured on DAC, f=33.3 MHz and fs=3.35 GHz


The data across all of the measurement settings was collected in this manner and merged, joined, and concatenated using a collection of utilities shown in Table 7.

Platform | Use (packages)
Python (Anaconda/Spyder) | Merge, join, and concatenate
KNIME | Exploration/quality assessment
RapidMiner | Exploration/quality assessment
Excel | Quality assessment

Table 7. Initial data collection, exploration, and quality assessment tools
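As a sketch of the merge/concatenate step performed in Python, the per-capture exports can be stacked and then joined against the bin settings. The file names and column names below are hypothetical placeholders, not the actual capture format:

```python
import glob
import pandas as pd

frames = []
for path in sorted(glob.glob("captures/device_*.csv")):   # hypothetical layout
    df = pd.read_csv(path)
    df["source"] = path               # tag rows with their originating capture
    frames.append(df)

measurements = pd.concat(frames, ignore_index=True)       # stack all captures
settings = pd.read_csv("bin_settings.csv")                # hypothetical table
merged = measurements.merge(settings, on="bin", how="left")
```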

6.5.1 Authenticity verification

Harmonic powers were selected by calculating the fundamental and odd harmonic frequencies and taking the minimum absolute distance between each projected harmonic frequency and the measured frequencies in the spectrum analyzer output. There were no observed outliers in the combined data set. The mean of the data set across all devices and challenge frequencies is shown in Table 8; the mean value of the measurement distribution was calculated per harmonic across the 1250 measurements.

Harmonic | Measured Mean (dBm) | Measured dBc Mean | Theoretical dBc
Fundamental | -9.83 | 0.33 | 0.4
3h | -36.95 | 27.45 | 28
5h | -39.02 | 29.52 | 29
7h | -44.68 | 35.18 | 36
9h | -61.29 | 51.79 | 49
11h | -45.14 | 35.64 | 36

Table 8. Measured vs. Theoretical mean values of harmonic data
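The minimum-distance harmonic selection described above can be sketched as follows. The routine is illustrative only and ignores the folding of harmonics that land above fs/2, which the actual measurement bookkeeping must account for:

```python
import numpy as np

def harmonic_powers(freqs_hz, powers_dbm, f0_hz, n=6):
    """Measured power nearest each projected odd harmonic of f0.

    freqs_hz, powers_dbm: spectrum analyzer trace arrays.
    Returns powers for the fundamental, 3rd, 5th, ..., 11th (n=6).
    """
    freqs_hz = np.asarray(freqs_hz)
    powers_dbm = np.asarray(powers_dbm)
    targets = f0_hz * np.arange(1, 2 * n, 2)          # 1f, 3f, ..., 11f
    idx = [int(np.argmin(np.abs(freqs_hz - t))) for t in targets]
    return powers_dbm[idx]
```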


6.5.2 Unique ID Classification

Principal Component Analysis (PCA) was used to classify all 1250 device measurements in order to provide evidence of a unique ID methodology for each of the 250 individual devices. Transforming the data and retaining the values of each harmonic, for each device, created unique identification spaces. PCA is sensitive to the relative scaling of the original variables, so the data column ranges were normalized before applying the PCA transformation [80]. Harmonic values were normalized using the Z-score (Gaussian) transformation, giving each column a mean of 0.0 and a standard deviation of 1.0. The Z-score transformed data is shown in Table 9.

Harmonic | Lower Bound | Upper Bound
Fundamental | -1.896 | 0.864
3h | -1.427 | 1.047
5h | -1.126 | 1.637
7h | -1.95 | 1.062
9h | -1.627 | 1.352
11h | -1.909 | 1.135

Table 9. Z-Scored Transformed Harmonic Data
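The Z-score step is a column-wise standardization; a minimal sketch:

```python
import numpy as np

def zscore(X):
    """Column-wise Z-score: zero mean, unit standard deviation."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

# X: 1250 measurements x 6 harmonic powers; the per-column extremes of
# zscore(X) then correspond to the bounds reported in Table 9.
```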

A covariance matrix was constructed after normalizing the harmonic values. The matrix is an n×n matrix, where n equals the number of dimensions of the harmonic powers, six.


The covariance for two dimensions, X and Y, is as follows:

$$\mathrm{Cov}(X, Y) = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{n} \tag{78}$$

Therefore, the covariance matrix can be defined as,

$$\Sigma^{n \times n} = \left(C_{i,j}\right), \qquad C_{i,j} = \mathrm{Cov}(\mathrm{Dim}_i, \mathrm{Dim}_j) \tag{79}$$

where Dim_i is the i-th dimension of the matrix. The matrix values are shown in Table 10.

Row ID | Fund | 3rd_dB | 5th_dB | 7th_dB | 9th_dB | 11th_dB
Fund | 1 | 0.916 | 0.765 | 0.976 | 0.402 | -0.229
3rd_dB | 0.916 | 1 | 0.876 | 0.864 | 0.225 | -0.309
5th_dB | 0.765 | 0.876 | 1 | 0.811 | 0.513 | 0.155
7th_dB | 0.976 | 0.864 | 0.811 | 1 | 0.576 | -0.015
9th_dB | 0.402 | 0.225 | 0.513 | 0.576 | 1 | 0.761
11th_dB | -0.229 | -0.309 | 0.155 | -0.015 | 0.761 | 1

Table 10. Z-Score Transformation, covariance matrix in tabular form


Using the newly created transformation space, eigenvalues and eigenvectors were then calculated from the six-dimensional covariance matrix as,

$$\Sigma = U \Lambda U^{-1} \tag{80}$$

where the eigenvalue matrix Λ is diagonal and U is the eigenvector matrix of Σ. These eigenvectors and eigenvalues reveal the patterns in the data. The eigenvector corresponding to the largest eigenvalue is the first principal component and corresponds to the direction of greatest variance. Since n eigenvectors were derived, there are n principal components, shown in Table 11, ordered from high to low by eigenvalue. The first principal component (0) represents the largest amount of variance between the traces [81].

Principal Component | Upper Bound | Lower Bound
0 | 2.418 | -3.477
1 | 1.878 | -2.548
2 | 0.947 | -0.749
3 | 0.339 | -0.501
4 | 0.112 | -0.12
5 | 0.058 | -0.054

Table 11. Harmonic Principal Component Upper and Lower Bounds
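Eqs. (78) through (80) amount to an eigendecomposition of the covariance of the Z-scored data. A minimal sketch follows; note that np.cov normalizes by n−1 rather than the n of Eq. (78), which scales the eigenvalues slightly but leaves the principal directions unchanged:

```python
import numpy as np

def pca(Xz):
    """PCA via eigendecomposition of the covariance matrix (Eqs. 78-80)."""
    cov = np.cov(Xz, rowvar=False)            # 6x6 for six harmonics
    eigvals, eigvecs = np.linalg.eigh(cov)    # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]         # sort high-to-low variance
    return eigvals[order], eigvecs[:, order]

# eigvals, U = pca(Xz); scores = Xz @ U projects each measurement
# into the principal component space plotted in Figure 61.
```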


Thus, with a unique space created for each device, the devices are plotted in PCA space across all baseline fundamental frequencies (i.e., bins) in Figure 61.

Figure 61. PCA: All Bins and Devices

It is clear that a “dot” exists for each individual device, and that each device exists within each of the unique input frequency spaces. This space is only available through calculation of the PCA utilizing all of the harmonic frequencies; using only a subset, or a single harmonic, did not provide enough uniqueness to assign a specific ID to each device. In order to show a unique mapping of a device in PCA space, each bin is plotted across the dimensional analysis in Figure 62 through Figure 66. Device #204 is highlighted in each space to show the non-linear relationship between the spaces, which means the unique ID is increasingly difficult to model and replicate.
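This work does not prescribe a specific matching rule for reading an ID out of the PCA space; one straightforward illustration is a nearest-neighbor lookup against the enrolled PCA scores (a sketch, not the algorithm used here):

```python
import numpy as np

def identify(enrolled_scores, enrolled_ids, query_score):
    """Nearest-neighbor device ID lookup in PCA space (illustrative)."""
    d = np.linalg.norm(np.asarray(enrolled_scores) - query_score, axis=1)
    return enrolled_ids[int(np.argmin(d))]   # closest enrolled device
```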



Figure 62. PCA for Input Frequency 33.33 MHz


Figure 63. PCA for Input Frequency 64.82 MHz



Figure 64. PCA for Input Frequency 133.6 MHz


Figure 65. PCA for Input Frequency 334.7 MHz



Figure 66. PCA for Input Frequency 669 MHz

6.5.3 Changing amplitude mean through challenge-response

The PCA analysis provides the initial indication that a non-linear, unique behavior occurs as the input, or challenge, frequency to the device is changed. This phenomenon, described as finite sampling in Chapter 2, can be exploited to change the mean amplitude of the resultant frequency.

To demonstrate this phenomenon, the mean value of each of the measured harmonics, fundamental through 11th, is plotted for an input (challenge) frequency of 33.3 MHz versus an input frequency of 668 MHz in Figure 67. Note that the largest change in mean value is in the 7th harmonic, with ~4 dB of mean shift, and each harmonic shifts by a unique delta.


Figure 67. Mean value of harmonic amplitude vs. input challenge frequency
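The per-harmonic mean shift between two challenge frequencies can be sketched as below; the DataFrame column names are hypothetical labels for the merged measurement set:

```python
import pandas as pd

HARMONICS = ["fund", "h3", "h5", "h7", "h9", "h11"]   # hypothetical columns

def challenge_mean_shift(df, f_a_hz, f_b_hz):
    """Change in mean harmonic amplitude (dB) between two challenges."""
    mean_a = df[df["challenge_hz"] == f_a_hz][HARMONICS].mean()
    mean_b = df[df["challenge_hz"] == f_b_hz][HARMONICS].mean()
    return mean_b - mean_a

# e.g. challenge_mean_shift(merged, 33.3e6, 668e6) should show the
# largest delta (~4 dB) on the 7th harmonic, as in Figure 67.
```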

6.6 PSF proficiency in Hardware

Modeling, design, characterization, measurement, and analytics have all shown an ability to map and prove out the PSF concepts introduced in this research. Authentication and unique ID principles have been established on SoA hardware and through high resolution DAC analysis. This chapter has systematically walked through a procedure for completing authentication and unique ID tests that can be carried forward for reliability analysis. It is critical to note that each of the implementation examples and concepts shown in this work has many additional variants that can be applied through design, challenge-response pairing, and through an increase in parameter space to improve analysis and algorithm development for PoD in AMS ICs.


The framework outlined in this chapter leads directly to the question of how an authentication process would be initiated if the user did not have control or influence over the design. This situation helps shed light on the true power of the PSF framework, and specifically the framework for DACs. If a part “shows up,” any additional information about the component can be used to exercise the model and develop a challenge-response behavior. One way of obtaining such information is to pull the process control monitor (PCM) data that is collected on every foundry run. A date lot code on the part could be referenced to collect the process data, or, in a worst-case scenario, the part could be destructively analyzed to recreate a model of the hardware. This information can be injected directly back into the model to provide authentication, as well as to determine whether any modifications have been made to the architecture, with clear confidence and probability based on the number of samples available.


Chapter 7. Application of PSF Framework for Reliability

The previous chapters systematically determined an authenticity and unique ID design and characterization space for AMS integrated circuits. Once an IC is authenticated and uniquely identified, its parameters can then be used to statistically monitor for reliability threats, e.g. recycled and reused components.

7.1 PSF for Reliability Framework

To capture these aging effects, Figure 68 adds a temporal component for Parameters 2 to N, relative to the original measurement distribution determined by process variation. This relationship captures the degradation due to physical effects including Hot Carrier Injection (HCI), Time Dependent Dielectric Breakdown (TDDB), Bias Temperature Instability (BTI), and electromigration [82]. Therefore, each of the circuits, 1-2-3, will maintain a predictable relative relationship to the others when they are of the same pedigree, shown as the distribution mean changing when operated under the same conditions over Δt.

Figure 68. Unique Process Specific Function reliability analysis structure


These properties constitute a formal reliability signature, such that any IC within the original distribution can then be characterized to obtain information on remaining life or current time in usage, subject to meeting performance requirements. Parameter 2 is expanded on as a reliability signature showing measurable change over time, t0 to t4, relative to the original measurement distribution in Figure 69. Figure 70 shows that the behavior of each circuit is expected to remain consistent relative to other ICs of the same pedigree, with a mean that changes over time, reflecting the tracking of the reliability signature.

Figure 69. Reliability signature parameter over time relative to mean at t0

Figure 70. Reliability signature parameter with changing distribution mean over time

7.2 CMOS DAC Cell Reliability Analysis

The ability of current, Id, to flow in a FET device is controlled proportionally by the supply, the potential presented across the device channel (i.e., gate to source), Vgs, the process-specific threshold voltage, Vt, and the geometry parameter β, previously described in Chapter 6. Assuming that the supply voltage, or drain voltage of the device, Vd, is sufficient to keep it in the saturation region, operating as a switch, the proportional current relationship can be shown as [83],

$$I_d \propto \frac{1}{2}\beta\left(V_{gs} - V_t\right)^2 \tag{81}$$

A hard breakdown across these terminals will cause a hard failure; however, soft breakdowns will lead to a lowering of the current source current and reduced output signal amplitude, or a degraded capability to switch the source current to drive differential operation at the output load. These behaviors cause the global reliability change introduced in Figure 68. An improved version of a standard DAC cell from a current steering architecture is shown in Figure 71.

The current cell is comprised of a cascoded current source (CS), a switch pair, and an output cascode. The current source defines the cell current drive and is cascoded to regulate the impedance seen by the switch pair. The switch pair are low-threshold thin-oxide transistors (nominal VDD = 1.2 V) that switch the cell current between the differential output nodes, and the output cascode transistor is a thick-gate transistor (nominal VDD = 2.5 V) that is used to regulate the impedance seen by the output node.

The architecture was implemented in a low power 90nm CMOS process and is used to demonstrate degradation effects through a step stress test strategy.

Figure 71. Improved current Steering DAC cell [50]

The step stress test strategy was to hold nominal (or very low) stress on the digital sub-circuit while applying increased stress and accelerated wear-out to the analog signal sub-circuit aspects of the DAC. The bias cascode and supply for the switch pair can be electrically tied together and used as gate voltages that stress the thin-oxide transistors, while the output node and VOPCAS can be grouped as gate voltages that stress the thick-oxide transistors, both in the analog portion of the DAC. Stressing the thin-oxide devices was performed by stepping the switch pair and bias cascode while holding the other supplies nominal. Changes made to the thin-oxide bias do not affect the thick-gate cascode transistor’s source voltage. VCS sets the current in the cascode and can be adjusted, but its foremost role is to enable proper DAC output operation. Design simulation is first used to determine critical areas for targeting of accelerated reliability testing.

Proprietary foundry models were utilized for model execution; therefore, only simulation results are presented. Additionally, hardware characterization data is provided in areas identified as key for reliability monitoring. Two primary means were found to increase accelerated wear-out of the DAC analog sub-circuit to model predicted lifetime: 1) increased temperature stress and 2) increased voltage stress. The increased voltage stresses can be applied separately or together, predominantly to the thin gate-oxide or thick gate-oxide transistor circuits of the analog sub-circuit section of the DAC.

7.3 Step-Stress Reliability Characterization

Electrical DC, RF, and thermal testing of DAC devices requires a medium-complexity Device Under Test (DUT) board, shown in Figure 72, and supporting mechanical hardware for several low noise analog power supplies (4), digital power supplies (2), a high speed clock, a digital serial data interface, the RF output, and a thermal interface to the device.

Figure 72. Analog Mixed Signal Reliability DUT board


The DUT package is a flip-chip BGA design with the back side of the die exposed. This packaging arrangement led to a hot finger design where the DUT itself is mounted on the back side of the DUT board, such that when the DUT board is mounted within its mechanical carrier, the DUT makes physical contact with the hot finger while the board’s DC and digital electrical connections occur from below. Gigahertz RF signal inputs and outputs all come from the top side of the board.

The DUT board is characterized with a single channel test station that accurately replicates the thermal and electrical interfaces of a commercial 96-channel lifetest station. All lifetest module characterization and individual part step-stressing was performed on this station. A simplified block diagram is shown in Figure 73.

Figure 73. Reliability single channel test station [84]


7.3.1 First-pass DAC reliability analysis

While all of the transistors in the current steering DAC cell architecture introduced in Chapter 4 will suffer degradation under these stresses, initial investigation identified the switch pair as the sub-structure that contributes the majority of changes to the PSF parameters. The current source array devices need to be set to a nominal value for the analog portion of the output current source to produce the correct voltage level for its designed bit waveform. Therefore, these devices are typically mirrored from a nearby bias device, and any effective reliability change in threshold is countered to maintain bias.

When the switch pairs fail, the current source may be unable to turn ON or OFF; hence, DAC codes would not be properly rendered in aggregate, with an expected failure of the spurious free dynamic range (SFDR) needed to maintain the DAC’s resolution. In soft breakdown, this means there is an increased turn-on and turn-off time due to threshold- and oxide-breakdown-induced variation through RC time constant delay, from decreased mobility (increased resistance, Ron) and increased capacitance (CLoad). This soft breakdown will cause the bit waveforms to lose their sharp edges due to the decrease in the high frequency content of the signal, as the output becomes more sinusoidal, as shown in Figure 74.

Figure 74. Softening of bit waveform through increased stress

Each of the highlighted reliability impact areas was investigated through simulation. Switch pair reliability was subsequently analyzed by full hardware characterization.

7.3.2 Bias Cascode Transistor (M2) Failure Mode:

Investigation of the bias cascode transistor shows that if Vgd, Vgs, or Vgb breaks down, then current will flow from the gate to the respective node, resulting in a lowering of gate voltage and a reduction in CS cascode current. A hard breakdown will cause a hard failure; however, soft breakdowns will lead to lowering current source current, lower gain, and reduced RF amplitude. The highest-field cases will break down first, and any node that breaks down will cause a similar effect. An increased supply stress of up to 2.3 V was applied along with an elevated device junction temperature of 150C. In this case, Vgb is the worst-case field region, and the estimated lifetime of M2 at 150C, at VCAS = 2.3 V, is >35 kPOH. In the CMOS process being utilized, the thin-oxide FETs of the M1 and M2 transistors are rated as high as 1.3 V, and do not see voltages this high under stress due to the headroom drop above them. As expected from the theoretical analysis, these devices are not a reliability driver, and their effects will not be indicators for changes in PSF signatures. A summary of the simulation test conditions and results is shown in Table 12 and Figure 75.

Worst Case Bias | Breakdown Threshold | Measured Characteristics | Temp Acceleration | Time to Breakdown
VDD = 2.3 V | 14 Å oxide | Vgd, Vgs, Vgb | 150C | 35,297 hrs

Table 12. Bias Cascode Transistor Time to Breakdown analysis


Figure 75. Cascode Bias device voltage simulation

7.3.3 Switch Pair (M3,M4) breakdown failure mode:

When the switch pairs fail, the current source may be unable to turn ON or OFF; hence, DAC codes would not be properly rendered in aggregate, with an expected change in the SFDR signature, since the waveform would have anomalous codes represented in the output. Since these devices are used as switches, if the gate-to-drain voltage, Vgd, or Vgs breaks down, there is conduction to the drain or source, respectively. In soft breakdown, this may increase turn-on or turn-off time. However, if the gate breakdown is hard and the switch is inoperative, then the part will also fail with high SFDR. In this case, Vgb is the worst-case field region. Under simulation, the part demonstrated function at high bias, with bi = b̄i = 2.3 V and Vgb = 2.3 V at 150C. The estimated lifetime of M3 and M4 under this condition was >1 hr. However, by limiting D to 2.0 V, in order not to over-stress the thin-oxide devices, M3 and M4 at 150C still fail early, at ~115 hrs. This means that this stress condition is a viable candidate to cause early failures. Switch pair transistors can be used for accelerated wear-out; however, to get them to fail they must be greatly accelerated. A summary of the simulation results is shown in Figure 76 and Table 13.

Figure 76. Switch Pair voltage simulation values


Stress Bias | Breakdown Threshold | Measured Characteristics | Temp Acceleration | Time to Breakdown
VDD = 2.3 V | 14 Å oxide | Vgd, Vgs, Vgb | 150C | 1 hr
VDD = 2.0 V | 14 Å oxide | Vgd, Vgs, Vgb | 150C | 114.62 hrs

Table 13. Switch Pair Failure Analysis

Increased temperature and voltage conditions were applied to the DAC in order to empirically collect and monitor reliability indicators. A summary of the device testing is shown in Figure 77. The DAC supply voltage was stepped across the range from 1.5 V to 3 V in order to try to force a hard failure. It is clearly seen that under the increasing stress condition, the 1st harmonic SFDR characteristic maintains a nominal value, only slightly decreasing with increased supply. An interesting characteristic occurs once the voltage is returned to the nominal supply at approximately the 10000 second mark, in the blue circled area: there is a slight increase in 1st harmonic SFDR, from 45 to 48 dB. As the supply is increased, an initial drop back to the stable SFDR is restored. However, increasing the supply starts to cause additional degradation, and the SFDR continues to rise until reaching failure at the ~5 hr mark. This is a soft failure, though: once the voltage is restored to nominal, the SFDR rises again, this time to an even higher level, at the 53 dB point, above where it was before the soft failure, in the green circled area. This clearly indicates that the increased voltage stress, coupled with the constant elevated temperature, creates a reliability signature effect that represents permanent soft degradation. This effect is representative of the expected behavior from degrading switch speed.

Figure 77. Switch Pair Measured SFDR Reliability Analysis

7.3.4 Reliability Summary

Through the hardware reliability analysis, it was shown that the PSF space will in fact change by altering the statistical relationships between the harmonic frequency parameters. Increasing SFDR was demonstrated as an excellent indicator of degradation in the current steering DAC cell architecture. Figure 78 shows the temporal change in the harmonic content.


Figure 78. Harmonic parameters changing over time, t0 to t4, through degradation
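Tracking the SFDR indicator over stress time reduces to comparing the fundamental against the largest remaining spur. A minimal sketch over the harmonic powers already collected:

```python
import numpy as np

def sfdr_db(harmonic_powers_dbm):
    """SFDR from [fundamental, 3rd, 5th, 7th, 9th, 11th] powers in dBm."""
    p = np.asarray(harmonic_powers_dbm)
    return p[0] - p[1:].max()    # fundamental minus the largest spur

# Evaluating sfdr_db at each stress step reproduces the degradation
# trajectory circled in Figure 77.
```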

This demonstrates that power from the high frequency harmonics is shifted to the lower frequency harmonics, changing each harmonic’s mean and distribution, the SFDR, and consequently the authentication PSF. This physical method of monitoring provides a foundational starting point for empirical analysis, and the ability of the characterization space to determine a recycled/re-used component. The PSF parameter space provides a significant increase in degradation fidelity compared to the pure delay measurements shown in previous work [24][26], since the parameter space has sensitivity and selectivity to the individual harmonics and is not dependent on precision delay measurements. The result of these phenomena will be an expansion of the separation between nominal and degraded parameter statistics, as shown in Figure 79. The switch pair soft breakdown, targeted as a first-step reliability driver, will cause the parameter source to change means from time t0 to t4, with a separation in distribution, highlighted through increasing separation across die 1, 2, 3. The expected measured value of the reliability signature parameter is illustrated over time in Figure 79. This measurement space provides the capability to relate the current state of the IC to its projected lifetime. This would represent one or a grouping of random, multi-variate harmonic distributions from the DAC spectrum. The behavior of each circuit is shown relative to other ICs of the same pedigree, with changing statistics over time, reflecting the tracking of the reliability signature previously shown in Figure 68. The IC will be considered at end of life when it breaches the failure specification. Any IC within the original distribution can then be characterized by measuring the reliability signature parameter to obtain information on remaining life or current time in usage.

Figure 79. Reliability signature parameter measured over time


Figure 80 shows visually what the expected plot would look like from measuring a randomly chosen, previously authenticated IC at any time from fabrication, conceptually t0, to end of life. All of the previously described phenomena result in a final harmonic space that has changing means and distributions based on the shifting of power in the harmonic parameter space, but relative to each unique design and/or component.

Figure 80. Reliability signature parameter measured on any initially authenticated part


Chapter 8. Conclusion and Future Work

A novel Process Specific Function approach was introduced and demonstrated, providing a foundational hardware assurance methodology to authenticate AMS ICs and provide traits for unique ID and reliability monitoring.

8.1 Research Summary

The research accomplishments demonstrated in this work were focused specifically on a critical component in a previously unrepresented space of hardware security. A DAC was designed, derived, and modeled to exploit PSF behavior. Simulation capabilities showed predictable circuit traits, including random process variations for authentication and unique ID. The model showed a 90% PoD and less than a 10% false alarm rate for an individual process specific cloning scenario, showing foundational design capability for AMS counterfeit prevention and identification. The work makes significant progress towards quantifying design specific authentication behavior for the first time in analog ICs. A statistical framework was derived and applied for varying sample sizes, including the situation where there is a need to determine whether or not one individual part is authentic.

A SoA high resolution DAC was configured for a 3-bit PSF and demonstrated proof of concept in authentication and unique ID. Exploration of the reliability response of a 90nm DAC cell was undertaken, with results showing a traceable signature for application of PSF to recycled and reused components. The foundational framework described provides a technique to evaluate the trustworthiness of mixed signal integrated circuits from untrusted sources. The technique does not require any external circuits, which increases confidence in the usable part and reduces overall required area. Specifically, this technique provides a 3X reduction in area as compared to current state-of-the-art techniques of memory based authentication on a cell by cell basis. This is the first technique to provide a design space in the analog mixed-signal domain for generating a secure hardware block addressing authentication, identification, and reliability signatures through statistical representation of measurable deterministic behavior, not just digitally quantified difference vectors.

8.2 Future Work

The statistical parameters identified in this research can be carried forward to build a PoD model across different foundry processes, with hardware measurements used for classification of unique IDs and reliability monitoring. Moreover, the PSF framework application space can be extended to other components such as power amplifiers, voltage controlled oscillators, and mixers. Additionally, the PSF framework can be implemented in a scalable architecture, where additional challenge and operation signals can be used to authenticate and ID communication or IoT nodes across cascaded components.

Extension of the statistical framework should focus on the addition of more complex input stimuli, including multi-tone and modulated signals, in an effort to raise confidence and reduce Type I and Type II errors. Additions to the challenge–response behavior that take advantage of the limited clock and sample relationship, as well as traits such as intermodulation distortion (IMD) and adjacent channel power ratio (ACPR), should be explored to develop additional nonlinearities.


Bibliography

[1] (2011, August 27). IC market to eclipse $300 billion in 2013. [Online] Available: http://www.digikey.com/purchasingpro/us/en/articles/semiconductors/ic-market- to-eclipse-300-billion-in-2013/1027

[2] (2016, September). Continuing to grow: China’s impact on the semiconductor industry 2016 update. [Online] Available: http://www.pwc.com/gx/en/technology/chinas-impact-on-semiconductor- industry/continuing-to-grow.jhtml

[3] U. Guin, D. Dimase, and M. Tehranipoor, “Counterfeit Integrated Circuits: Detection, Avoidance, and the Challenges Ahead”, Journal of Electronic Testing: Theory and Applications, v.30 n.1, p.9-23, Feb 2014

[4] Hearing before the Senate Committee on Armed Services, Counterfeit Electronic Parts in the U.S. Military Supply Chain, Nov. 8, 2011

[5] U. Guin et al., “Counterfeit Integrated Circuits: A Rising Threat in the Global Semiconductor Supply Chain,” Proceedings of the IEEE, vol. 102, no. 8, pp. 1207-1228, Aug 2014

[6] Defense Microelectronics Activity, [Online] Available: http://www.dmea.osd.mil/library.html

[7] Maynard. S. “Trusted Manufacturing of Integrated Circuits for the Department of Defense”, NDIA Manufacturing Division Meeting. October 28, 2010

[8] Ortiz, C. “DoD Trusted Foundry Program, Ensuring Trust for National Security & Defense Systems”, NDIA Systems Engineering Division Meeting. June 20, 2012.

[9] IBM Press release “GLOBALFOUNDRIES to Acquire IBM's Microelectronics Business” October 20, 2014. http://www- 03.ibm.com/press/us/en/pressrelease/45110.wss

[10] Cassell J (2012) Reports of counterfeit parts quadruple since 2009. Challenging US Defense Industry and National Security. http://press.ihs.com


[11] A. Antonopoulos, C. Kapatsori, and Y. Makris, “Security and Trust in the Analog/Mixed-Signal/RF Domain: A Survey and a Perspective,” in Proceedings of the 2017 22nd IEEE European Test Symposium (ETS), Limassol, Cyprus, 22–26 May 2017, pp. 1–10

[12] IHS iSuppli press release, “Top 5 Most Counterfeited Parts Represent a $169 Billion Potential Challenge for Global Semiconductor Market,” April 4, 2012: http://www.isuppli.com/Semiconductor-Value-Chain/News/Pages/Top-5-Most-Counterfeited-Parts-Represent-a-$169-Billion-Potential-Challenge-for-Global-Semiconductor-Market.aspx

[13] Sen. C. Levin et al., “Inquiry Into Counterfeit Electronic Parts In The Department Of Defense Supply Chain,” U.S. Senate Committee on Armed Services, Rep. 112- 167, May 21, 2012.

[14] J. Villasenor and M. Tehranipoor, “The Hidden Dangers of Chop-Shop Electronics” IEEE Spectrum, vol. 50, no. 10, pp. 41-45, 2013.

[15] M. Pecht and S. Tiku, ‘‘Bogus: Electronic manufacturing and consumers confront a rising tide of counterfeit electronics,’’ IEEE Spectrum, vol. 43, no. 5, pp. 37–46, May 2006.

[16] M. M. Tehranipoor, U. Guin, S. Bhunia, “Invasion of the Hardware Snatchers: Cloned Electronics Pollute the Market, ” in IEEE Spectrum, v. 54 Issue 5, May 2017, p.36-41

[17] (2015, AUG 08). Watch GPS Attacks That Can Kill DJI Drones Or Bypass White House Ban.; [Online] Available www.forbes.com/sites/thomasbrewster/2015/08/08/qihoo-hacks-drone-gps/

[18] (2015, AUG 07). Hacking A Phone's GPS May Have Just Got Easier.; [Online] Available https://www.forbes.com/sites/parmyolson/2015/08/07/gps-spoofing- hackers-defcon/

[19] M. Jin et al., "Reliability characterization of 10nm FinFET technology with multi- VT gate stack for low power and high performance", 2016 IEEE International Electron Devices Meeting (IEDM), pp. 15.1.1-15.1.4, 2016.

[20] Y. Neirynck and D. Vernikovsky, “The criticality of sub-components utilized for next-generation high-volume manufacturing,” 28th Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC), pp 372-375, 2017.

[21] R. Maes and P. Tuyls, Secure Integrated Circuits and Systems, I. Verbauwhede, Ed. New York: Springer, 2010

[22] R. S. Pappu; B. Recht; J. Taylor; N. Gershenfeld, “Physical one-way functions,” Science vol. 297, pp. 2026-2029, Sep. 2002

[23] A. Sadeghi and D. Naccache (Eds.), Towards Hardware-Intrinsic Security: Foundations and Practice, Springer, Heidelberg, 2010

[24] R. Kumar, V. Patil, and S. Kundu, “On Design of Temperature Invariant Physically Unclonable Functions based on Ring Oscillators,” in IEEE Computer Society Annual Symposium on VLSI, 2012, pp. 165-170

[25] G. Suh and S. Devadas, ‘‘Physical unclonable functions for device authentication and secret key generation,’’ in Proc. Design Automation Conf., 2007, pp. 9–14.

[26] J. B. Wendt, F. Koushanfar, and M. Potkonjak, “Techniques for foundry identification,” Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, pp. 1-6, 1-5 June 2014

[27] R. Kumar and W. Burleson, “On design of a highly secure PUF based on non-linear current mirrors,” Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on, pp. 38-43, 6-7 May 2014

[28] L. E. Langley, “Specific emitter identification (SEI) and classical parameter fusion technology,” WESCON/’93 Conference Record, pp. 377-381, 28-30 Sep 1993

[29] K. A. Remley, C. A. Grosvenor, R. T. Johnk, D. R. Novotny, P. D. Hale, M. D. McKinley, A. Karygiannis, and E. Antonakakis, “Electromagnetic signatures of WLAN cards and network security,” Signal Processing and Information Technology, 2005. Proceedings of the Fifth IEEE International Symposium on, pp. 484-488, 21 Dec. 2005

[30] Ming-Wei Liu; Doherty, J.F. "Specific Emitter Identification using Nonlinear Device Estimation", Sarnoff Symposium, 2008 IEEE, On page(s): 1 – 5

[31] W. Suski, II, M. Temple, M. Mendenhall, and R. Mills, “Radio frequency fingerprinting commercial communication devices to enhance electronic security,” Int. J. Electronic Security Digital Forensics, vol. 1, no. 3, pp. 301–322, 2008.

[32] W. Cobb, E. Laspe, R. Baldwin, M. Temple, and Y. Kim, “Intrinsic Physical layer Authentication of Integrated Circuits,” IEEE Transactions on Information Forensics and Security, vol. 7 no. 1, pp. 14-24, Feb. 2012


[33] Maricau, E.; Gielen, G., "NBTI model for analogue IC reliability simulation," Electronics Letters , vol.46, no.18, pp.1279,1280, September 2010

[34] Y. Kim, H. Shim, M. Jin, J. Bae, C. Liu, S. Pae, "Investigation of HCI effects in FinFET based ring oscillator circuits and IP blocks", Reliability Physics Symposium (IRPS)2017 IEEE International, pp. 4C-2.1-4C-2.4, 2017, ISSN 1938-1891.

[35] MIL-HDBK-217F: “Military Handbook - Reliability Prediction of Electronic Equipment,” December, 1991: http://www.sre.org/pubs/Mil-Hdbk-217F.pdf.

[36] Sutaria, K.; Velamala, J.; Yu Cao, "Multi-level reliability simulation for IC design," Solid-State and Integrated Circuit Technology (ICSICT), 2012 IEEE 11th International Conference on , pp.1,4, Oct. 29 2012-Nov. 1 2012

[37] I. Polian, “Security Aspects of Analog and Mixed-Signal Circuits,” in IEEE International Mixed-Signal Testing Workshop (IMSTW), 2016

[38] S. K. Mathew et al."A 0.19pJ/b PVT-variation-tolerant hybrid physically unclonable function circuit for 100% stable secure key generation in 22nm CMOS," ISSCC, 2014 IEEE pp.278,279, 9-13 Feb. 2014

[39] C. Helfmeier, C. Bolt, D. Nedospasov, and JP Seifert, "Cloning Physically Unclonable Functions," Hardware-Oriented Security and Trust (HOST), 2013 IEEE International Symposium on , pp.1,6, 2-3 June 2013

[40] R. Baker, CMOS: Circuit Design, Layout, and Simulation, 3rd Ed. Wiley-IEEE Press, 2010

[41] M. Casto, B. Dupaix, W. Khalil, Mixed Signal Process Specific Function, Application No. 15/729,87, Patent Pending, October 2017

[42] L. Xin, L. Peng, X. Yang, and L. Pileggi, “Analog and RF Circuit Macromodels for System-Level Analysis,” in Proceedings of DAC, June 2003, pp. 478–483.

[43] W. Zhao et al., “Rigorous Extraction of Process Variations for 65-nm CMOS Design,” IEEE Transactions on Semiconductor Manufacturing, vol. 22, no. 1, Feb 2009

[44] L. Goncalves, A. Subtil, M. R. Oliveira, and P. de Zea Bermudez, “ROC Curve Estimation: An Overview,” REVSTAT Statistical Journal, vol. 12, no. 1, March 2014, pp. 1-20

[45] S. Nassif, “Delay variability: Sources, impacts and trends,” in IEEE Int. Solid- State Circuits Conf., Dig. Tech. Papers, San Francisco, CA, Feb.2000, pp. 368– 369.

[46] Y. Ye, F. Liu, M. Chen, S. Nassif, Y. Cao, Statistical modeling and simulation of threshold variation under random dopant fluctuations and line-edge roughness, IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 19(6), 987–996 (2011)

[47] M. Abu Rahma; M. Anis, Nanometer Variation-Tolerant SRAM Circuits and Statistical Design for Yield, Springer, 2013

[48] Nikolic, B.; Ji-Hoon Park; Jaehwa Kwak; Giraud, B.; Zheng Guo; Liang-Teck Pang; Seng Oon Toh; Jevtic, R.; Kun Qian; Spanos, C., "Technology Variability From a Design Perspective," Circuits and Systems I: Regular Papers, IEEE Transactions on , vol.58, no.9, pp.1996,2009, Sept. 2011

[49] (2017, JUL 17). How a Bug in an Obscure Chip Exposed a Billion Smartphones to Hackers. [Online] Available: www.wired.com

[50] S. Balasubramanian, “Studies on High-Speed Digital-to-Analog Conversion,” Dissertation, Ohio State University, 2013

[51] Balasubramanian, S.; Boumaiza, S.; Sarbishaei, Hassan; Quach, T.; Orlando, P.; Volakis, J.; Creech, G.; Wilson, J.; Khalil, W., "Ultimate Transmission," Microwave Magazine, IEEE , vol.13, no.1, pp.64,82, Jan.-Feb. 2012

[52] W. Kester, The Data Conversion Handbook, Elsevier/Newnes, 2005, ISBN 0- 7506-7841-0

[53] S. Balasubramanian, V.J. Patel, and W. Khalil, “Current and Emerging Trends in the Design of Digital-to-Analog Converters” in Design, Modeling and Testing of Data Converters, Springer, ISBN 978-3-642-39655-7, Eds. Carbone, Paolo, Kiaei, Sayfe, Xu, Fang 2014

[54] R. M. Gray, “Quantization noise spectra,” IEEE Trans. Inf. Theory, vol.36, no. 6, pp. 1230–1244, 1990

[55] McDonnel, Samantha. “Compensation and Calibration Techniques for High Performance Current-Steering DACs.” Dissertation. Ohio State University, 2017.


[56] A. G. Clavier, "Distortion in a pulse count modulation system", AIEE Trans., vol. 66, pp. 989-1005, 1947.

[57] N. M. Blachman, “The Intermodulation and Distortion due to Quantization of Sinusoids,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 33, no. 6, pp. 1417–1426, Dec. 1985

[58] A. G. Clavier, P. F. Panter, D. D. Grieg, "PCM distortion analysis", Elect. Eng., pp. 1110-1122, Nov. 1947.

[59] W. R. Bennett, "Spectra of quantized signals", Bell Syst. Tech. J., vol. 27, pp. 446- 472, July 1948.

[60] L. Duncan, “A 10-bit DC-20 GHz Multiple-Return-to-Zero DAC with >48 dB SFDR.” Dissertation. Ohio State University, 2017.

[61] G. Chandra and A. Seedher, "On the Spectral Tones in a Digital-Analog Converter Due to Mismatch and Flicker Noise," Circuits and Systems II: Express Briefs, IEEE Transactions on , vol.55, no.7, pp.619,623, July 2008

[62] F. Taylor, Principles of Signals and Systems, McGraw-Hill 1994

[63] S. McDonnell, V. Patel, L. Duncan, B. Dupaix, and W. Khalil, “Compensation and Calibration Techniques for Current-Steering DACs,” Circuits and Systems Magazine, 2017 IEEE, Volume 17, Issue 2, pp.4-26

[64] I. Myderrizi, and A. Zeki, “Current-Steering Digital-to-Analog Converters: Functional Specifications, Design Basics, and Behavioral Modeling,” IEEE Antennas and Propagation Magazine, vol. 52, no. 4, pp. 197-208, Aug. 2010

[65] A. Keshavarzi et al., “Measurements and modeling of intrinsic fluctuations in MOSFET threshold voltage,” in Proc. Int. Symp. Low Power Electronics Design, Aug. 2005.

[66] Proprietary 90nm process model guide, Non-Disclosure Agreement restriction

[67] Steven R. Taylor and Hans E. Hartse, An evaluation of generalized likelihood ratio outlier detection to identification of seismic events in western China, Bulletin of the Seismological Society of America, August, 1997 87:824-831

[68] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd Ed., McGraw-Hill, 1991

[69] D. Montgomery, Applied Statistics and Probability for Engineers 3rd Ed. John Wiley and Sons, 2003

[70] S. Theodoridis and K. Koutroumbas, Pattern Recognition, 4th ed. New York: Academic, 2009

[71] D. Dorfman and E. Alf, “Maximum-likelihood estimation of parameters of signal- detection theory and determination of confidence intervals—Rating-method data”, In Journal of Mathematical Psychology, Volume 6, Issue 3, 1969, Pages 487-496

[72] S. Yoder, S. Balasubramanian, W. Khalil, and V. Patel, "Accuracy and speed limitations in DACs across CMOS process technologies," Circuits and Systems (MWSCAS), 2013 IEEE 56th International Midwest Symposium on pp.868-871, 4-7 Aug. 2013

[73] M. Pelgrom, A. C. Duinmaijer and A. Welbers, "Matching properties of MOS transistors," IEEE Journal of Solid-State Circuits, vol. 24, no. 5, pp. 1433-1439, Oct. 1989.

[74] L. Duncan et. al. “A 10b DC-to-20GHz Multiple-Return-to-Zero DAC with >48dB SFDR,” Solid-State Circuits Conference (ISSCC), 2017 IEEE International. Page(s):286 – 287

[75] Agilent ParBERT, online: http://literature.cdn.keysight.com/litweb/pdf/5968- 9188E.pdf

[76] M. Gustavsson, J. Wikner, J. Jacob, T. Nianxiong, CMOS Data Converters for Communication, Springer, 2000

[77] J. van Engelen, Stability Analysis and Design of Bandpass Sigma Delta Modulators, Ph.D. thesis, Eindhoven University of Technology, 1999

[78] K. Doris, A. van Roermond and D. Leenaerts, Wide-Bandwidth High Dynamic Range D/A Converters. Springer, 2006

[79] Spectrum analysis fundamentals, Keysight Technologies

[80] I. Jolliffe Principal Component Analysis 2 ed. Springer, 2002

[81] J. Hogenboom, “Principal Component Analysis and Side-Channel Attacks,” Master Thesis, Radboud University Nijmegen, The Netherlands, Aug 2010

[82] E. Maricau and G. Gielen, Analog IC Reliability in Nanometer CMOS, Analog Circuits and Signal Processing, Springer, New York 2013

[83] Sedra, A. S. & Smith, K. C. (2004). Microelectronic circuits (Fifth ed.). New York: Oxford. p. 552. ISBN 0-19-514251-9.

[84] (2017, 14 NOV) Automated Accelerated Reliability Test Station (AARTS) Available Online: http://www.accelrf.com/solutions/dc-htol
