
Module 3: Electronics

2012-2013 Anne-Johan Annema

Translation: Yoeri Bruinsma Lian Xi

UNIVERSITEIT TWENTE

Contents

0 Introduction
   0.1 Electronics
   0.2 Electronic systems
   0.3 A general electronic system
   0.4 Structure of the book
   0.5 Preparatory knowledge for this book
      0.5.1 Notation
      0.5.2 Linear components
      0.5.3 Independent sources
      0.5.4 Controlled or independent sources
      0.5.5 Kirchhoff's current and voltage laws
      0.5.6 Superposition
      0.5.7 Thévenin and Norton equivalents
      0.5.8 Linear networks and signals
      0.5.9 Fourier transformations
      0.5.10 Differential equations
      0.5.11 Circuit analysis methods
      0.5.12 Transfer functions
      0.5.13 Bode plots
      0.5.14 Calculations & mathematics
      0.5.15 Simplifying relations
   0.6 Solving exercises
      0.6.1 Verification using the answer manual
   0.7 And finally...

1 Models
   1.1 Components
   1.2 Analysing and modelling circuits
   1.3 Ideal model
   1.4 Diode models and time-independent circuits
   1.5 Diode models and time-dependent circuits

2 Summary of physics
   2.1 Introduction
   2.2 ......
   2.3 ......


   2.4 Bipolar junction transistors (BJTs)
   2.5 MOS-transistors
      2.5.1 MOS-transistor in strong inversion
      2.5.2 MOS-transistor in strong inversion: summary
      2.5.3 MOS-transistor symbols

3 Bias circuits
   3.1 Introduction
      3.1.1 Biasing a transistor: the bias point
      3.1.2 Biasing a transistor: requirements for its bias point
      3.1.3 Biasing a transistor
   3.2 Biasing a BJT
   3.3 Biasing a MOS-transistor

4 Small-signal equivalent circuits
   4.1 Introduction
   4.2 Linear model for transistors
      4.2.1 SSEC of a BJT
      4.2.2 SSEC of a MOS transistor
      4.2.3 Small-signal parameters
   4.3 Amplifier circuits
      4.3.1 Coupling the input and output
      4.3.2 SSEC of a basic amplifier circuit

5 Amplifier circuits
   5.1 Introduction
      5.1.1 The common-base circuit, CBC
      5.1.2 The common-gate circuit, CGC
      5.1.3 The common-collector circuit, CCC
      5.1.4 The common-drain circuit, CDC
      5.1.5 CEC, CBC, CCC, CSC, CGC and CDC: a comparison
   5.2 Cascade of multiple amplifiers
      5.2.1 ......
      5.2.2 ......
      5.2.3 Current mirror

6 Feedback
   6.1 Introduction
   6.2 Negative feedback
      6.2.1 Full negative feedback: a first concept
      6.2.2 Partial negative feedback: a generalised concept
   6.3 Negative feedback and amplifiers: some examples
      6.3.1 Effect of negative feedback on bandwidth
      6.3.2 Effect of negative feedback on interference and noise
      6.3.3 Effect of negative feedback on nonlinear distortion
   6.4 Stability
      6.4.1 Rough classification of systems with feedback

      6.4.2 Stability of systems with negative feedback
      6.4.3 Stable and unstable: now what?
      6.4.4 Stability of systems with feedback: examples
      6.4.5 Phase and gain margin
      6.4.6 Positive feedback: peaking
      6.4.7 The Bode plot as tool for presentation
   6.5 Feedback and dominant first-order behavior
      6.5.1 Creating dominant first-order behavior

7 The op-amp and negative feedback
   7.1 Introduction
   7.2 Linear applications
      7.2.1 Non-inverting voltage amplifier
      7.2.2 Inverting voltage amplifier
      7.2.3 Virtual ground
      7.2.4 The integrator
      7.2.5 The differentiator
      7.2.6 Summation of currents
      7.2.7 Summation of voltages
      7.2.8 Subtraction of voltages
      7.2.9 Filters
   7.3 Feedback with non-linear elements
      7.3.1 Logarithmic conversion
      7.3.2 Exponential converters
   7.4 Op-amp non-idealities
      7.4.1 Frequency-dependent gain
      7.4.2 First-order behavior and slew rate

8 Positive feedback: oscillators
   8.1 Harmonic oscillators with a low Q
      8.1.1 General introduction
      8.1.2 Wien bridge oscillator
      8.1.3 Phase-shift oscillators
      8.1.4 Startup conditions
   8.2 Harmonic oscillators with higher Q
      8.2.1 Single transistor oscillators
      8.2.2 Crystal oscillators

9 Basic internal circuits for op amps
   9.1 Introduction
   9.2 The input stage
      9.2.1 Symmetry requirement
      9.2.2 First implementation: large signal behaviour
      9.2.3 Second (or actual) implementation: large signal behaviour
      9.2.4 Small signal behaviour
      9.2.5 Small signal behaviour with a non-ideal current source
   9.3 From input stage to intermediate stage

   9.4 Intermediate stages
   9.5 Output stages
      9.5.1 Requirements for the output stage
      9.5.2 Simple output stages
      9.5.3 Slightly less simple output stages
      9.5.4 Power efficiency aspects of output stages
   9.6 Frequency dependencies
      9.6.1 Bandwidth limitations: small signal
      9.6.2 Bandwidth limitations: large signal

10 Introduction to RF electronics
   10.1 Introduction
   10.2 Transmitting and receiving
   10.3 Maxwell
      10.3.1 Maxwell and Kirchhoff
   10.4 Introduction to antennae
   10.5 Dipole antennae
   10.6 Monopole antennae
   10.7 Other antenna characteristics
   10.8 A transmission system, a bit more exact
   10.9 In addition
      10.9.1 ...... and maximum power transfer
      10.9.2 Fourier transformations, FFT and more

11 Digital Circuits
   11.1 Introduction
   11.2 Designing logical building blocks
      11.2.1 Basic logic gates
      11.2.2 The relation between “high” (digital) and “high” (analog)
   11.3 Old solutions: DL, DTL and TTL
      11.3.1 Diode logic
      11.3.2 Diode-transistor and transistor-transistor logic
   11.4 NMOS and PMOS logic
      11.4.1 From TTL to NMOS
      11.4.2 The analog linear amplifier
      11.4.3 A digital (saturated) amplifier
      11.4.4 Arbitrary functions with NMOS logic
      11.4.5 The PMOS alternative
   11.5 The current solution: CMOS implementation
   11.6 The loading of a ......
      11.6.1 Comparing power consumption
   11.7 Choosing the supply voltage
      11.7.1 Scaling
      11.7.2 Transistor scaling and supply voltage
   11.8 Speed
      11.8.1 Definitions of parameters
   11.9 The latch

      11.9.1 Signal retention

Bibliography

Index

Chapter 0

Introduction

0.1 Electronics

In our everyday life, we are surrounded by ever more electronic devices, each containing ever more electronic circuitry. This growth of electronics in existing devices drastically increases their applicability and usability. Examples of existing devices that have gained more functions are:

• TVs with wireless Dolby Surround sound and digital video enhancement

• radios with multi-channel audio processors and RDS

• computers with increasing calculation speeds

• car-electronics for controlling the engine, suspension, brakes, air conditioning, ABS, ESP and more

• cellular phones becoming pocket computers

• wireless Local-Area Network

• electronic control of more devices

• voice-controlled devices

Furthermore, there is an increasing number of “new” applications due to the new possibilities provided by electronics. Very soon, electronic devices will offer many more options than they do today; the new options will overshadow the current “new” options, making them look old and outdated¹. What all these new applications have in common is that the computational power of electronics keeps increasing while the electronics become cheaper at the same time. Most of the “new” applications of electronics either enlarge the possibilities of existing devices or increase their ease of use.

¹Examples from the past 25 years include, among others, wireless telephones (GSM, introduced in 1992), the PC (introduced in 1982 with a 4.77 MHz 8-bit CPU), the CD (introduced in 1983), electronic motor management and magnetic bank cards.


0.2 Electronic systems

In general, electronic systems process information “picked up” from the physical world (using a sensor) which transforms the physical information to the electronic domain. The physical information can be almost anything:

• temperature (your room, a combustion engine, a CPU, ...)

• light (fiber-optic connections, optical data-readers, CD, DVD, motion sensors, spectroscopy, ...)

• pressure (a switch, sound, weight, ...)

• electromagnetic waves (radio, GSM, ...)

• magnetic fields (like in conventional hard drives)

After the transition to the electrical domain, the electronics can process the information. After processing, the final electronic signal is converted back into something physical, usually by an actuator. The physical quantity can, again, be almost anything: temperature, fan speed, light, sound, radiation, magnetic fields on a hard drive and many others.

0.3 A general electronic system

A representation of an advanced electronic system is shown in figure 0.1. Clearly recognizable are the input and output signals of the system; these can be the signals from sensors and to actuators. Another “input signal” that is needed is the supply voltage².

Since the input and output signals come from and go to the (analog) physical world, they are analog by nature. The analog character of audio, video and radio signals is clear. There are, however, many analog signals that carry binary data. Examples are the signals from optical readers (for CD, DVD and optical LAN) and high-speed binary data connections, mostly found in PCs (USB2, USB3, FireWire, AGP and PCI busses). Hence, analog signal-processing circuits are needed for the transformation into binary signals.

In an advanced electronic system, many control functions and signal-processing functions are performed in the digital domain. A number of such functions are schematically represented in the centre of figure 0.1. Two general representations of electronic systems are shown in figure 0.2. The difference between the two systems is artificial: the top system is an analog electronic system, while the lower one is a mix of an analog and a digital electronic system. Both systems are electronic, and both have analog electrical input and output signals.

²Since the power supply voltage usually does not contain any information, it is not considered a “signal”.

[Figure: blocks for an antenna with RF amplifier, ADC, CPU, DSP, DAC, memory, data I/O with external clock, and power management with external supply, connecting audio/video/data inputs to actuators.]

Figure 0.1: Block schematic representation of an advanced electronic system

[Figure: top, a purely analog system between an analog electrical input signal and an analog electrical output signal; bottom, the same with an analog/DIGITAL/analog chain.]

Figure 0.2: Block schematic representation of a general electronic system

Electronic systems can, in general, be subdivided into different electronic functions. These include, among others:

• The amplification of input signals for further processing or to control an actuator (amplify).

For example, a radio or television receiver has an input signal, delivered by the cable, in the order of 10⁻⁵ W (10 μW), while driving the display can easily take tens of Watts. The power a vacuum cleaner uses to move air is also in the order of tens of Watts, while flipping its switch takes about 10⁻⁵ W (10 μW). The power used to warm the air is usually in the order of 1 kW.

• The analog manipulation of signals (filtering, mixing, ..)

In radio equipment, including mobile phones, the transmitted signals are modulated on high frequencies. When receiving such signals, for example, the amplified signal from the antenna is mixed down to a lower frequency and then filtered. These analog operations take much less power (in the order of 10 mW) than directly digitizing the antenna signal with a high-frequency AD converter and mixing and filtering it digitally (with a power consumption in the order of 1 kW).

• The digital manipulation of signals (filtering, editing, ...)

Many manipulations of signals are currently done in the digital domain. It can be proven that for signals which require a reasonable amount of accuracy, it can be more efficient (with respect to power usage) to perform the manipulations digitally. Furthermore, digital manipulation allows relatively easy signal processing that cannot easily be done in the analog domain. Digital processing can be done flexibly but relatively inefficiently (in terms of power usage) using a general-purpose CPU and custom software, or inflexibly but efficiently on very specific data processors.

• Transformation of signals (AD and DA conversion, information extraction)

Extensive signal editing is usually done digitally. To do so, the analog signals have to be transformed into the digital domain. Furthermore, something useful has to be done with the resulting signal, which usually means it has to be transformed back into some analog quantity using DA conversion. Also for detecting binary input signals, for instance from CDs or cable connections, we need the familiar analog-to-digital conversion.

• The storage of signals (memory)

The blocks which perform an analog or digital “manipulation” can contain a number of functions. For most analog and digital functions (a function in this context is an abstraction of ‘that which the system does physically’), it holds that the output signal has a higher power than the input signal. In addition, performing the manipulations themselves also takes energy. Obviously, the extra power at the output and the energy used to perform the manipulations have to come from somewhere: to keep the process going, the circuit has to be “fed”. The circuit of figure 0.2 then has to be replaced by that of figure 0.3.

[Figure: the two systems of figure 0.2, now with a power supply feeding the analog and digital blocks.]

Figure 0.3: Block schematic representation of an electronic system with supply voltage

The power supply usually provides a DC voltage. In many devices this DC voltage is provided by batteries. When larger amounts of power are needed, we usually resort to wall-socket AC power; this alternating voltage then first has to be transformed into a clean DC voltage.

0.4 Structure of the book

Figure 0.4 shows the general structure of this book.

[Figure: flow of chapters, from models and modelling (chapter 1), semiconductor components (chapter 2) and intro basic circuits (chapter 3), via small-signal equivalent circuits (chapter 4), amplifying circuits (chapter 5) and feedback (chapter 6), to harmonic oscillators (chapter 8), the op-amp and negative feedback (chapter 7), digital (chapter 11), introduction to RF electronics and transmitters (chapter 10) and basic internal circuits for op-amps (chapter 9).]

Figure 0.4: Structure of this book

Chapter 1 covers models and modelling: simplifications of reality which are used to describe more complex, non-linear problems in an easy manner. Chapter 2 presents a short recap of semiconductor physics: the very basics of the electronic components used in this book: diodes, bipolar transistors and MOS transistors. These semiconducting devices are intrinsically non-linear. Although non-linearity increases calculation difficulty, you need non-linear components in any sensible electronic circuit or system: (power) amplification fundamentally requires non-linear components.

Basic amplifying circuits using bipolar junction transistors (BJTs) and MOS transistors are introduced in chapter 3. Chapters 4 and 5 deal with modelling of these basic amplifying circuits: (large-signal) settings, small-signal equivalent circuits and more complex circuits are covered.

Feedback around amplifying circuits is a very powerful tool that can be used to improve specific characteristics or to suppress unwanted effects. Feedback is covered extensively in chapter 6; issues such as stability and improvement of desired characteristics are discussed. Elaborations and implementations of feedback are presented in chapter 7 for stable systems, and in chapter 8 for oscillating systems.

Finally, specific RF issues such as transmission, antennae, reflections and other interesting matters are covered in chapter 10. For readability reasons:

Important conclusions are framed

Background or extra information is usually printed using a somewhat smaller font on a gray background.

Examples are also shown on a gray background, but with normal sized font.

An exception to this rule is all content in the remainder of this chapter, dealing with preparatory knowledge. This could have been typeset as background information, but since it is considered essential, we have chosen not to do so.

0.5 Preparatory knowledge for this book

It is assumed that the reader has some basic knowledge of electronic circuit analysis and of mathematics. Below, a short recap is presented.

0.5.1 Notation

This book uses a consistent notation for components and signals; see the table below.

Notation   What it is                                          Expression
R          a resistance                                        u_R/i_R
C          a capacitance                                       Q_C/u_C
L          an inductance                                       Φ_L/i_L
Z          an impedance                                        can be anything
r          a differential resistance                           ∂u_R/∂i_R
c          a differential capacitance                          ∂Q_C/∂u_C
l          a differential inductance                           ∂Φ_L/∂i_L
v_X        the total voltage in node X
V_X        the DC voltage in node X
v_x        the voltage variation in node X                     v_X − V_X
V_x        the amplitude of the voltage variation in node X    v̂_x
f          the signal frequency in [Hz]
ω          the angular signal frequency in [rad/s]

0.5.2 Linear components

Simple electronic networks are built up from linear components; the element equations and impedances of these are listed below.

Component    Value      u-i relation     Impedance        Unit
resistor     R = u/i    i = u/R          Z_R = R          Ω (ohm)
capacitor    C = Q/u    i = C·∂u/∂t      Z_C = 1/(jωC)    F (farad)
inductor     L = Φ/i    u = L·∂i/∂t      Z_L = jωL        H (henry)

The symbols for the components above, as used in this book, are presented in figure 0.5a to c. Often a general impedance is used, rather than an impedance of a specific resistor, capacitor or inductor. In that case, the symbol for a resistor is used with a notation which illustrates that it is an impedance: Z_C, Z_R, Z_L or Z_x.

Figure 0.5: Linear components: a) a resistor or impedance, b) a capacitor, c) an inductor, d) a DC-voltage source, e) a voltage source and f) a current source.

0.5.3 Independent sources

There are two basic types of independent sources: the independent voltage source and the independent current source. Usually, the term “independent” is dropped for simplicity.

The voltage source forces a voltage difference across its terminals, independent of the current that flows due to that voltage. Hence, a voltage source can either deliver or dissipate energy. In this book, we will encounter two different independent voltage sources: the DC-voltage source (shown in figure 0.5d) and the general voltage source (figure 0.5e).

The current source, shown in figure 0.5f, forces, as its name suggests, a current through its terminals, whatever the resulting voltage. This book does not make any symbolic distinction between the various current sources: DC, AC, independent or controlled.

0.5.4 Controlled or independent sources

Circuits with amplifying components are usually modelled using controlled sources. We already know the controlled voltage and current sources, shown symbolically in figures 0.5e and f. The value of a source shows whether it is controlled or independent: a value I_A corresponds to an (independent) DC current source, while a value like s·v_in corresponds to a controlled current source.

0.5.5 Kirchhoff’s current and voltage laws

Kirchhoff’s voltage law (KVL) and Kirchhoff’s current law (KCL), formulated in 1845 by Gustav Kirchhoff, give elementary relations for electronic circuits³. The laws state, in short, that the total voltage drop around any mesh equals 0 V, and that no current can appear in or disappear from a node:

   Σ_mesh v_n = 0   and   Σ_node i_n = 0

In essence, the current and voltage laws are nothing more or less than the two most basic laws of (simple) physics: the laws of conservation of matter and conservation of energy. As a short explanation: if you apply the law of conservation of matter to the particles we call electrons, you obtain Kirchhoff’s current law: electrons do not disappear or appear at random, and hence the summed current into any node is zero. Furthermore, electrons have some level of energy, which is expressed in electronvolts [eV]. In electronics, we usually work with a large number of electrons (a Coulomb), which results in the unit [V]. Since neither electrons nor energy (dis)appear at random, the voltage drop around any mesh must equal 0 V.

³These laws are valid if there is no electromagnetic coupling into or out of the circuit. Electromagnetic effects were taken into account later by Maxwell, which is nowadays very relevant for RF electronics and EMC problems.
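As a quick numerical illustration of both laws, the sketch below (all component values are made-up assumptions, not taken from the text) applies KCL at the middle node of a resistive divider and KVL around its single mesh:

```python
V, R1, R2 = 10.0, 1e3, 2e3        # assumed: 10 V source, 1 kOhm and 2 kOhm resistors

# KCL at node A, between R1 and R2: (V - vA)/R1 - vA/R2 = 0, solved for vA
vA = V * R2 / (R1 + R2)

i_in = (V - vA) / R1              # current into node A through R1
i_out = vA / R2                   # current out of node A through R2
assert abs(i_in - i_out) < 1e-12  # KCL: the summed node current is zero

# KVL around the single mesh: -V + i*R1 + i*R2 = 0
assert abs(-V + i_in * R1 + i_in * R2) < 1e-12
print(vA)                         # 6.666666666666667 (volts)
```

Both residuals vanish to machine precision, exactly as conservation of charge and energy demand.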

0.5.6 Superposition

In any circuit, the voltage at a node (or the current in a branch) results from the contributions of all sources in that circuit. However, calculating the voltage at some node due to all sources simultaneously can be a lot of work. In linear circuits, a voltage or current can be calculated far less cumbersomely by calculating the contribution of every independent source separately and finally summing all these contributions. This method is called superposition; it is one of the most powerful tools available for analysis. The underlying idea is that a complex problem is separated into small problems in a very efficient way⁴. A good example of a circuit that is easily analyzed using superposition, but very difficult without it, is the R-2R ladder circuit shown in figure 0.6.

[Figure: sources v1 to v4, each connected through a 2R resistor to consecutive nodes of a ladder whose nodes are joined by series resistors R; a fifth 2R terminates the leftmost node to ground; vOUT is taken at the rightmost node.]

Figure 0.6: An R-2R-ladder circuit: an example where superposition is extremely useful.

The output voltage as a function of the four independent sources is easily obtained if we calculate the separate contributions of all the independent sources. For the given circuit, we do this four times, using the circuits presented in figure 0.7a-d. From this follows:

   v_OUT(v1) = (1/16)·v1
   v_OUT(v2) = (1/8)·v2
   v_OUT(v3) = (1/4)·v3
   v_OUT(v4) = 2R/(2R + 2R)·v4 = (1/2)·v4

   v_OUT = v4/2 + v3/4 + v2/8 + v1/16

Verifying this can be very easy; the simplest way is to simplify the circuit step by step. Example: the circuit of figure 0.7d is simplified if we take the leftmost (2R//2R) combination and replace it by a single R, and then replace (R+R) by a single 2R. This results in the circuit of figure 0.7g. The same can be done for the circuits of figures 0.7b and c, which results in figures 0.7e and f. This example clearly shows that a “divide and conquer” strategy yields many possible simplifications, ultimately reducing the amount of cumbersome calculations.

⁴This principle was already used by Philip of Macedonia, around 350 BC, under the motto “divide et impera”, although it was not applied to electronic circuits at the time.
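The ladder result can also be cross-checked by brute-force nodal analysis. The sketch below (a hypothetical check with R = 1 Ω and arbitrary source values, neither from the text) builds the four nodal equations of figure 0.6 and solves them exactly with rational arithmetic:

```python
from fractions import Fraction as F

R = F(1)                           # R = 1 ohm; any R gives the same ratios
g, h = 1 / R, 1 / (2 * R)          # conductances of R and 2R
v = [F(16), F(8), F(4), F(2)]      # v1..v4, arbitrary test values

# nodal equations G*x = b for the four ladder nodes (x[3] is vOUT):
G = [[2*h + g, -g,      0,       0    ],
     [-g,      2*g + h, -g,      0    ],
     [0,       -g,      2*g + h, -g   ],
     [0,       0,       -g,      g + h]]
b = [h * vk for vk in v]

n = 4                              # Gaussian elimination, exact in rationals
for k in range(n):
    for row in range(k + 1, n):
        f = G[row][k] / G[k][k]
        for col in range(k, n):
            G[row][col] -= f * G[k][col]
        b[row] -= f * b[k]
x = [F(0)] * n
for row in reversed(range(n)):
    x[row] = (b[row] - sum(G[row][col] * x[col] for col in range(row + 1, n))) / G[row][row]

vout = x[3]
assert vout == v[3]/2 + v[2]/4 + v[1]/8 + v[0]/16
print(vout)                        # 4 (= 2/2 + 4/4 + 8/8 + 16/16)
```

The brute-force solve reproduces v_OUT = v4/2 + v3/4 + v2/8 + v1/16 exactly, while taking far more work than the four superposition sub-circuits.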

[Figure: the ladder of figure 0.6 drawn four times, each time with only one source active (a to d), together with the simplified equivalents of the last three (e to g).]

Figure 0.7: The R-2R-ladder circuit separated into four pieces a, b, c and d. For the last three, the simplified versions shown in e, f and g suffice.

Advanced superposition

In most textbooks, superposition is formulated only for independent sources, and it may appear that it does not hold for dependent (controlled) sources or for circuits that contain dependent sources. This is wrong! In the analysis of circuits, you can calculate the contribution of any dependent source exactly the same way you would for an independent source. The trick is that at some stage (preferably at the end of the calculations) you have to fill in the controlling voltage or current for the controlled sources. For the calculation itself, it does not matter at all whether a source value is independent or dependent.

0.5.7 Thévenin and Norton equivalents

The electrical behavior of every linear circuit can be modelled as a single source with a single impedance. This is easily explained using the definition of a linear circuit: its electrical behavior is described by a linear function, which in turn is uniquely determined by just two points. For linear circuits, it is convenient to choose the two points where the load is Z = 0 Ω and Z → ∞. In words: calculate the open-terminal voltage of a (sub)circuit, without any load, and calculate the current which would flow if you short-circuited its terminals, and the two convenient points are determined. From these, a simple model can be constructed with just one source and one impedance. If the equivalent uses a current source, we call it a Norton equivalent, while a model with a voltage source is called a Thévenin equivalent. Both are named after their discoverers: Thévenin published in 1883 [1] and Norton in 1926 [2]⁵.

[Figure: a) a source v with series Z1, shunt Z2, series Z3 and shunt Z4, with a current source i across the output terminals; b) the Thévenin equivalent vEQU with series ZEQU; c) the Norton equivalent iEQU with parallel ZEQU.]

Figure 0.8: An arbitrary linear circuit with its Thévenin and Norton equivalents

The circuit in figure 0.8a has its Thévenin and Norton equivalents shown in figures 0.8b and c respectively. The open-circuit voltage and short-circuit current for this example are:

   v_open = −i·(Z4 // (Z3 + Z1//Z2)) + v · (Z2//(Z3+Z4)) / (Z1 + Z2//(Z3+Z4)) · Z4/(Z3+Z4)

   i_shortcircuit = −i + v · (Z3//Z2) / (Z1 + Z3//Z2) · 1/Z3

According to Ohm’s law, the equivalent circuits then follow from:

   v_EQU = v_open
   i_EQU = i_shortcircuit
   Z_EQU = v_open / i_shortcircuit

⁵The equivalent with a voltage source is called the Thévenin equivalent, although Helmholtz had published the same theory 30 years earlier. The work of Helmholtz, however, did not receive the same recognition.
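This recipe is easy to sketch numerically. In the snippet below all source and impedance values are made-up assumptions (and purely resistive, for readability); it evaluates the open-circuit voltage and short-circuit current of figure 0.8a and cross-checks Z_EQU against the impedance of the dead network:

```python
def par(a, b):
    return a * b / (a + b)        # parallel combination a//b

v, i = 10.0, 2e-3                 # assumed source values
Z1, Z2, Z3, Z4 = 1e3, 2e3, 3e3, 4e3   # assumed (resistive) impedances

v_open = (-i * par(Z4, Z3 + par(Z1, Z2))
          + v * par(Z2, Z3 + Z4) / (Z1 + par(Z2, Z3 + Z4)) * Z4 / (Z3 + Z4))
i_sc = -i + v * par(Z3, Z2) / (Z1 + par(Z3, Z2)) / Z3
Z_equ = v_open / i_sc

# cross-check: Z_EQU must equal the impedance seen at the output terminals
# with both sources nulled (v shorted, i opened): Z4 // (Z3 + Z1//Z2)
assert abs(Z_equ - par(Z4, Z3 + par(Z1, Z2))) < 1e-6
print(round(Z_equ, 2))            # 1913.04 (ohms)
```

The ratio v_open/i_sc indeed equals the dead-network impedance, independent of the source values, which is exactly why two points suffice to characterize a linear circuit.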

0.5.8 Linear networks and signals

A linear network consists of linear components: resistors (with an instantaneous linear relation between voltage and current) and capacitors and inductors (with an integral or differential relation between v and i). The input source can be either a current source or a voltage source. One of the nicest properties of a linear circuit is that the input signal emerges undistorted at the output. At first, this might seem strange: if we apply a square wave to a linear circuit, we generally do not get a square wave at the output. The explanation is that the input signal can be viewed as a sum of signals that each remain undistorted, but that may get a different phase or amplitude; see section 0.5.9 for a discussion of this topic. The types of signals for which the output signal is a shifted and scaled version of the input signal s are those that satisfy the following mathematical relation:

   ∂s(t)/∂t ∝ s(t + τ)

Signals that satisfy this are sin(ωt + φ) and e^((a+jb)·t); in other words, harmonic and exponential signals. Euler has shown [4] that these two types of signals are related⁶: e^(jb·t) is a rotating unit vector in the complex plane with angle bt. Its representation on the real axis is cos(bt), while the imaginary part is j·sin(bt). From this, it follows that:

   e^((a+jb)t) = e^(at)·(cos(bt) + j·sin(bt))
   sin(ωt) = (e^(jωt) − e^(−jωt)) / (2j)
   cos(ωt) = (e^(jωt) + e^(−jωt)) / 2

With this knowledge, it is also very easy to deduce the impedance of reactive elements. For example, for a capacitor it follows (based on a harmonic signal v = V_c·sin(ωt)):

   Z_C = v/i
   i = C·∂v/∂t = C·∂(V_c·sin(ωt))/∂t = C·ω·V_c·cos(ωt) = C·ω·V_c·sin(ωt + 90°)
   Z_C = sin(ωt) / (C·ω·sin(ωt + 90°)) = 1/(jωC)

⁶The proof of this is remarkably simple if we take the Taylor expansions of an exponential function and of the sine and cosine: e^x = 1 + x + x²/2! + x³/3! + ..., cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + ... and sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + .... If we remember that j⁰ = 1, j¹ = j, j² = −1 and j³ = −j, then it immediately follows that e^(jx) = cos(x) + j·sin(x).
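The capacitor result can be checked numerically: differentiate v(t) = V_c·sin(ωt) and confirm that the current is C·ω·V_c·cos(ωt), i.e. the same harmonic shifted by +90°, so that the impedance magnitude is 1/(ωC). The component values below are assumptions for illustration only:

```python
import math

C, omega, Vc = 100e-9, 2 * math.pi * 1e3, 1.0   # assumed: 100 nF, 1 kHz, 1 V

def v(t):
    return Vc * math.sin(omega * t)             # harmonic drive voltage

def i(t, h=1e-9):
    return C * (v(t + h) - v(t - h)) / (2 * h)  # i = C * dv/dt, numerically

# the current equals C*omega*Vc*cos(omega*t): same shape, 90 degrees ahead
t = 37e-6
assert abs(i(t) - C * omega * Vc * math.cos(omega * t)) < 1e-6
print(round(1 / (omega * C), 1))                # 1591.5 (|Z_C| in ohms at 1 kHz)
```

The numerical derivative agrees with the phasor prediction at any time instant, which is the whole point: for harmonic signals, differentiation reduces to multiplication by jω.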

0.5.9 Fourier transformations

The basic signals used to analyze linear circuits (the sinusoidal functions) have a close connection to Fourier analysis. Fourier stated [5] that every periodic signal f(x) can be written as an infinite sum of harmonic signals:

   f(x) = a₀ + a₁·cos(x + φ₁) + a₂·cos(2x + φ₂) + ...

Using a number of trigonometric relations, the Fourier transformation of a signal is obtained. The relevant relations here are:

   ∫₀^(2π) sin(x) dx = 0   and   ∫₀^(2π) cos(x) dx = 0
   a·cos(x) + b·sin(x) = √(a² + b²)·cos(x − atan(b/a))
   sin(x)·sin(y) = (1/2)·cos(x − y) − (1/2)·cos(x + y)

The first two relations state that the average of a harmonic signal equals 0. The third relation states that the sum of a sine and a cosine with the same argument can be written as one harmonic function with that argument and a phase shift. The fourth relation is crucial: the product of two harmonics equals the sum of two harmonics, one with the difference of the arguments, the other with the sum of the arguments. From the first three relations, it immediately follows that if a periodic signal with angular frequency ω can be written as a sum of harmonics, then those harmonics must have angular frequencies which are integer multiples of the angular frequency of the original signal. Now, a new relation can be written:

   f(ωt) = a₀ + a₁·cos(ωt + φ₁) + a₂·cos(2ωt + φ₂) + ...

Notice that the a₀ term corresponds to the 0th harmonic, in fact the a₀·cos(0) term. The above relation can already be used to perform Fourier transformations: all aₙ terms and all φₙ factors would have to be determined. In general, however, determining the φₙ factors can be very difficult. Using the third trigonometric relation, we can simplify this process. This gives us the most widespread Fourier formula:

   f(ωt) = a₀ + a₁·cos(ωt) + b₁·sin(ωt) + a₂·cos(2ωt) + b₂·sin(2ωt) + ...

From the fourth trigonometric relation, together with the first two, the relations to determine aₙ and bₙ can be derived quite easily:

   ∫₀^(2π) sin²(x) dx = ∫₀^(2π) cos²(x) dx = (1/2)·2π = π  →

   aₙ = (1/π)·∫₀^(2π) f(x)·cos(nx) dx = (2/T)·∫₀^T f(ωt)·cos(nωt) dt
   bₙ = (1/π)·∫₀^(2π) f(x)·sin(nx) dx = (2/T)·∫₀^T f(ωt)·sin(nωt) dt,   with T = 2π/ω = 1/f

The Laplace transformation is closely related to the Fourier transformation; one of the most important “differences” is the use of e^(jx) instead of sin(x) and cos(x). In this book, the Laplace and Fourier transformations are not used explicitly; the most important thing is to realize that every periodic signal consists of harmonic components.
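As a sketch of these integrals in practice, the snippet below numerically evaluates aₙ and bₙ for a square wave (+1 on (0, π), −1 on (π, 2π)); the textbook result for this signal is bₙ = 4/(nπ) for odd n and zero otherwise. The signal and step count are illustrative choices:

```python
import math

def f(x):
    # square wave: +1 on (0, pi), -1 on (pi, 2*pi), period 2*pi
    return 1.0 if (x % (2 * math.pi)) < math.pi else -1.0

def coeff(n, kind, steps=100000):
    # midpoint-rule evaluation of (1/pi) * integral of f(x)*cos(nx) or f(x)*sin(nx)
    trig = math.cos if kind == "a" else math.sin
    dx = 2 * math.pi / steps
    s = sum(f((k + 0.5) * dx) * trig(n * (k + 0.5) * dx) for k in range(steps))
    return s * dx / math.pi

assert abs(coeff(1, "b") - 4 / math.pi) < 1e-3       # b1 = 4/pi
assert abs(coeff(3, "b") - 4 / (3 * math.pi)) < 1e-3  # b3 = 4/(3*pi)
assert abs(coeff(2, "b")) < 1e-3                      # even harmonics vanish
assert abs(coeff(1, "a")) < 1e-3                      # all a_n vanish for this odd signal
print(round(coeff(1, "b"), 3))                        # 1.273
```

Only odd sine harmonics survive, and their frequencies are integer multiples of the fundamental, exactly as the relations above predict.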

0.5.10 Differential equations

Also closely related to the basic signals in linear systems are differential equations and their solutions. Usually, it is very convenient to analyze circuits in the frequency domain using complex impedances. To do so, the circuit must be linear. Electronic circuits usually satisfy this condition, or are modelled as linear in order to be able to use complex impedances and frequency-domain analysis. However, not all circuits can be linearized. In those cases, the use of complex impedances may not be allowed, and the original element equations must be used, which can only be analyzed in the time domain. This usually gives a differential equation. Below is a short summary for 1st and 2nd order differential equations:

   B·(dx/dt) + C·x = D
   A·(d²x/dt²) + B·(dx/dt) + C·x = D

It is evident that the resulting signal x(t) has a derivative with the same shape as the signal itself, i.e. either exponential or harmonic. The exponential form is the most general one, and is thus used most of the time. The easiest solution method⁷ is to substitute the most general form and solve for the missing parameters of the homogeneous solution:

x(t) = X·e^{a·t}

aB·X·e^{a·t} + C·X·e^{a·t} = 0  →  a = −C/B

a²A·X·e^{a·t} + aB·X·e^{a·t} + C·X·e^{a·t} = 0  →  a = (−B ± √(B² − 4AC)) / (2A)

As you can see, there is just one solution for first-order differential equations, and two for second-order differential equations. (And yes, three for a third-order differential equation.) The two second-order solutions can be complex, in which case an (exponentially increasing or decreasing) harmonic solution results:

X·e^{(a+jb)t} + X·e^{(a−jb)t} = X·e^{at}·(e^{jbt} + e^{−jbt}) = 2X·e^{at}·cos(bt)

When there are only real solutions, the output is the sum of two exponential functions. Next, the particular solution, in which D is also taken into account, has to be found. This usually takes some tricks⁸. From the initial conditions, the remaining missing parameters can be determined.

⁷A different, simple, solution method for first-order differential equations is separation of variables and integration.
⁸Tricks or knowledge. If D is a constant, the particular solution x_particular = constant can be tried. The same goes for D = sin(ωt), where x_particular = A·sin(ωt) + B·cos(ωt) can be tried.
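The roots of the characteristic equation can be checked numerically; the values of A, B and C below are illustrative assumptions, chosen so that the roots come out complex:

```python
import cmath

# Roots of A*a^2 + B*a + C = 0, the characteristic equation of
# A*x'' + B*x' + C*x = 0 (illustrative values, chosen so B^2 - 4AC < 0).
A, B, C = 1.0, 2.0, 10.0
disc = cmath.sqrt(B * B - 4 * A * C)
a1 = (-B + disc) / (2 * A)
a2 = (-B - disc) / (2 * A)
# Complex-conjugate roots a +/- jb correspond to the damped oscillation
# 2*X*exp(a*t)*cos(b*t) derived above.
print(a1, a2)
```

Here the roots are −1 ± 3j: a solution that oscillates at 3 rad/s while decaying with e^{−t}.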

0.5.11 Circuit analysis methods A number of circuit analysis methods for (linear) electronic circuits are well known. The most common methods are the nodal analysis and the mesh analysis; in this book, we will mostly be using the brute force approach. All these methods are very system- atic, and while the first two are very well suited for implementation in software, the third method gives more insight (although it is difficult to automate in software).

• In nodal analysis, you calculate the total current at every node. According to Kirchhoff’s current law, this current must be zero at every node. For easy calculation of these currents, passive components are described by admittances instead of resistances (impedances) and only current sources are used. Every voltage source is replaced by its Norton equivalent. If this method is performed properly, a network with N nodes gives a set of N independent equations, which can be solved to give all node voltages.

Figure 0.9: Example network for node analysis (current source J1, conductances g1–g6, nodes n1, n2 and n3).

v1·(g1 + g2 + g3) − v2·g2 − v3·g3 = J1
−v1·g2 + v2·(g2 + g4 + g5) − v3·g5 = 0
−v1·g3 − v2·g5 + v3·(g3 + g5 + g6) = 0

Solving this set of equations can be done by hand quite straightforwardly, for example by using Gaussian elimination. This set of equations can also be solved easily in software. For that, the set of equations is usually written in matrix form:

⎡ g1+g2+g3   −g2         −g3        ⎤ ⎡ v1 ⎤   ⎡ J1 ⎤
⎢ −g2        g2+g4+g5    −g5        ⎥ ⎢ v2 ⎥ = ⎢ 0  ⎥
⎣ −g3        −g5         g3+g5+g6   ⎦ ⎣ v3 ⎦   ⎣ 0  ⎦

Solving this equation numerically can be done easily using matrix inversion. Matrix inversion in software is usually implemented via LU-decomposition, Gaussian elimination and backward substitution. You can also do it by hand, which is the boring, non-insightful method you’re taught to do. Sorry to inform you about that.
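As a sketch of the software route (the component values, in siemens and amperes, are arbitrary assumptions), the nodal matrix above can be handed to a standard linear-algebra routine:

```python
import numpy as np

# Nodal matrix of figure 0.9; g1..g6 and J1 are illustrative values, not from the text.
g1, g2, g3, g4, g5, g6 = 1.0, 2.0, 1.0, 0.5, 1.0, 0.5   # conductances [S]
J1 = 1.0                                                 # source current [A]
G = np.array([[g1 + g2 + g3, -g2,          -g3          ],
              [-g2,          g2 + g4 + g5, -g5          ],
              [-g3,          -g5,          g3 + g5 + g6 ]])
J = np.array([J1, 0.0, 0.0])
v = np.linalg.solve(G, J)   # internally LU decomposition + substitution
print(v)                    # node voltages v1, v2, v3
```

`np.linalg.solve` performs exactly the LU-decomposition-with-substitution procedure mentioned above, without forming the inverse explicitly.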

• The mesh analysis calculates the total voltage around a mesh. From Kirchhoff’s voltage law, we know that this summed voltage must be equal to 0 V. Just as with the nodal analysis, a set of equations is formulated, which has to be solved. Since the mesh analysis uses voltages, we must replace every current source with its Thévenin equivalent. The circuit in figure 0.10 then gives:

Figure 0.10: Example network for mesh analysis (voltage source E1, impedances Z1–Z6, meshes m1–m3, nodes n1–n3).

i1·(Z1 + Z2 + Z4) − i2·Z2 − i3·Z4 = E1
−i1·Z2 + i2·(Z2 + Z3 + Z5) − i3·Z5 = 0
−i1·Z4 − i2·Z5 + i3·(Z4 + Z5 + Z6) = 0

or equivalently

⎡ Z1+Z2+Z4   −Z2         −Z4        ⎤ ⎡ i1 ⎤   ⎡ E1 ⎤
⎢ −Z2        Z2+Z3+Z5    −Z5        ⎥ ⎢ i2 ⎥ = ⎢ 0  ⎥
⎣ −Z4        −Z5         Z4+Z5+Z6   ⎦ ⎣ i3 ⎦   ⎣ 0  ⎦

This can again be solved with Gaussian elimination and backward substitution: fancy terms for simply working in a systematic manner to solve a set of linear equations. Just as with the nodal analysis you can do it by hand, but a computer is much better at it. To make it worse, you probably do not get a lot of insight from doing these matrix inversions by hand.

• The brute force approach subdivides the problem in a systematic manner, until every small sub-problem is not a problem anymore. Substituting everything back gives the desired answer. The method uses (for electronic circuits) Kirchhoff’s voltage law, Kirchhoff’s current law, and the element equations, whichever comes in handy at that instant. In a circuit with N_m voltage meshes, N_k nodes and N_c electronic components, obtaining the answer takes a maximum of (N_m + N_k + N_c) derivation steps, and another (N_m + N_k + N_c) substitution steps. The first equation that should be written down equals the desired answer. If a voltage transfer H is requested, then the first statement is a small elaboration (or specification) of the question itself:

H = v_OUT / v_IN

Next, every unknown on the right-hand side of the relation must be solved using KVL, KCL or an element equation. Here, multiple approaches are available, depending on the choices made. It is important to recognize that all variables for which an expression has already been derived (and which hence appear on the left side of the “=”-symbol) are known. For the transfer of the voltage from E1 to v_n2 in figure 0.10, we get, for example:

H = v_n2 / v_E1
v_n2 = i_Z4·Z4                    (EE)
i_Z4 = i_Z2 + i_Z5                (KCL)
i_Z2 = (v_n1 − v_n2)/Z2           (EE)
i_Z5 = (v_n3 − v_n2)/Z5           (EE)
v_n1 = v_E1 − v_Z1                (KVL)
v_n3 = i_Z6·Z6                    (EE)
v_Z1 = Z1·(i_Z2 + i_Z3)           (EE)
i_Z6 = −i_Z5 + i_Z3               (KCL)
i_Z3 = (v_n1 − v_n3)/Z3           (EE)

Substituting these equations from the bottom up gives the desired relation. It seems like a lot of cumbersome work, but other methods need just as much (or even more) effort. Below, a portion of the substitution is presented. While calculating v_n3, we get an expression which is a function of v_n3 again. This means that there are loops in the circuit: feedback paths from the output of your circuit to the input, for example. The only correct method to continue is to separate the variables, as shown below⁹.

i_Z6 = −i_Z5 + (v_n1 − v_n3)/Z3
v_Z1 = Z1·(i_Z2 + (v_n1 − v_n3)/Z3)
v_n3 = −i_Z5·Z6 + ((v_n1 − v_n3)/Z3)·Z6          ⇐⇒ (separate variables)
v_n3·(1 + Z6/Z3) = −i_Z5·Z6 + (v_n1/Z3)·Z6       ⇐⇒
v_n3 = −i_Z5·(Z6·Z3)/(Z3 + Z6) + v_n1·Z6/(Z3 + Z6)

As a next step the other variables must be calculated, requiring some rewriting. Smaller circuits, or circuits without loops (here, the loop through Z3), are much less work. The brute force approach will be used for small circuits within this course. Larger and more complex circuits will be divided into subsystems and calculated one at a time (or will not be analyzed at all).

⁹The other method is recursive, with the associated problem that the recursion never ends.

v_n1 = ...
i_Z2 = ...
i_Z5 = ...
i_Z4 = ...
v_n2 = ...
H = ...

The nice thing about this brute force approach is the fact that you are working toward one specific answer, while your method is based on divide and conquer. With the brute force approach a complex problem — e.g. calculating some transfer function or impedance — is divided into many very simple problems — e.g. element equations, KCL, KVL — which are combined to get the complete answer. Another positive aspect of the brute force approach is that — as you gain more experience with this type of approach — the gained knowledge allows for quicker analysis.
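The same bookkeeping of EE/KCL/KVL relations for figure 0.10 can be delegated to a computer algebra system. The sketch below assumes the topology as read from the substitution chain above (Z1 from E1 to n1, Z2 between n1 and n2, Z3 between n1 and n3, Z4 and Z6 to ground, Z5 between n2 and n3):

```python
import sympy as sp

# Node equations of figure 0.10, written directly from KCL and the element
# equations; the topology is inferred from the substitution chain in the text.
Z1, Z2, Z3, Z4, Z5, Z6, E1 = sp.symbols('Z1 Z2 Z3 Z4 Z5 Z6 E1')
v1, v2, v3 = sp.symbols('v1 v2 v3')
eqs = [
    sp.Eq((E1 - v1) / Z1, (v1 - v2) / Z2 + (v1 - v3) / Z3),  # KCL at n1
    sp.Eq((v1 - v2) / Z2 + (v3 - v2) / Z5, v2 / Z4),         # KCL at n2
    sp.Eq((v1 - v3) / Z3 + (v2 - v3) / Z5, v3 / Z6),         # KCL at n3
]
sol = sp.solve(eqs, [v1, v2, v3])
H = sp.simplify(sol[v2] / E1)    # the transfer H = v_n2 / v_E1
print(H)
```

A quick sanity check: with all six impedances equal, symmetry gives v_n2 = v_n3 and the transfer reduces to 1/4.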

0.5.12 Transfer functions

In electronics, we often search for a relation between the input signal and something which is a consequence of that signal. Usually this consequence is an output signal, meaning that we often have to find a transfer function. Other common relations are (among others) the input and output impedance of an electronic circuit:

H(jω) = signal_out / signal_in
Z_in = v_in / i_in
Z_out = v_out / i_out

To analyze, sketch or interpret these transfer functions or impedances it is usually convenient to rewrite the original function as (a product or sum of) standard forms. There are several standard forms; for a low-pass-like transfer function, we have:

H(jω) = H(ω0) · 1/(jω/ω0)

H(jω) = H(0) · 1/(1 + jω/ω0) = H(0) · 1/(1 + jω·τ0)

H(jω) = H(0) · 1/(1 + j·(ω/ω0)·(1/Q) + j²·ω²/ω0²)

The first form corresponds to an integrator, which is just a limit case of the second form. The second and third forms are identical, and have a first-order characteristic; the fourth form has a second-order characteristic. High-pass characteristics can be obtained from low-pass functions, using the substitution:

(jω/ω0)_LP → (ω0/jω)_HP

From this it follows that:

H(jω) = H(ω0) · jω/ω0

H(jω) = H(∞) · (jω/ω0)/(1 + jω/ω0) = H(∞) · jωτ0/(1 + jωτ0)

H(jω) = H(∞) · (j²·ω²/ω0²)/(1 + j·(ω/ω0)·(1/Q) + j²·ω²/ω0²)

The order of any transfer function is simply equal to the highest power of ω. Every normal transfer function, of arbitrary order, can be written as the product of first and second-order functions. Knowing the three basic standard forms for low-pass characteristics by heart and being able to do some basic manipulations pretty much covers everything you will ever need to visualize transfer functions or impedances as a function of frequency.

0.5.13 Bode plots

A Bode plot is a convenient method for presenting the behaviour of a (linear) circuit; this is done by plotting the magnitude and phase shift of a transfer function as a func- tion of the frequency. Here, the magnitude and frequency are plotted on a logarithmic scale, which proves to be very convenient10. Before we dive into Bode diagrams, we first repeat a number of mathematical logarithmic rules:

log(x) + log(y) = log(x·y)
log(x^y) = y·log(x)
log(x + y)|_{x<<y} ≈ log(y)

In words:

• the product of two values on a logarithmic scale equals the sum of those two values.

• a relation x = y^z on a log-log scale gives a straight line with a slope of z, where z can be any real number.

• the sum of two values, as an approximation, equals the largest of the two values on a logarithmic scale. Only if the two numbers are about the same size does this rule not apply.

To calculate the argument of a (complex) transfer function, the known rules for working with complex numbers are used. For example, the standard form of a first-order low-pass characteristic, H(jω) = H(0)·1/(1 + jω/ω0), gives:

• for ω << ω0, a transfer function almost equal to H(0), with a phase shift of 0°.

• for ω >> ω0, the transfer function is almost H(0)·ω0·ω⁻¹, with a phase shift of −90°.

• for ω = ω0, the transfer function equals H(0)/√2, with a phase shift equal to −45°.
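These three cases can be verified numerically; the values of H(0) and ω0 below are arbitrary assumptions:

```python
import cmath
import math

# First-order low-pass standard form H(jw) = H(0) / (1 + j*w/w0);
# H0 and w0 are illustrative values.
H0, w0 = 10.0, 1000.0

def H(w):
    return H0 / (1 + 1j * w / w0)

# At w = w0: magnitude H0/sqrt(2), phase -45 degrees.
mag = abs(H(w0))
phase_deg = math.degrees(cmath.phase(H(w0)))
print(mag, phase_deg)
```

Evaluating `H` far below and far above ω0 reproduces the other two asymptotes as well.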

The modulus of this transfer function — as a function of frequency — can be approximated by a constant at low frequency, and a straight line with a slope equal to −1 at high frequencies, when both magnitude and frequency are plotted on a logarithmic axis. The phase at low frequencies equals 0°, is −45° at ω = ω0 and approaches −90° for high frequencies. When plotted on a linear phase axis and a logarithmic frequency axis this results in an S-shaped curve. The thick curves in figure 0.11b give the modulus of the first-order transfer function (as a function of frequency) in a log-log plot. The asymptotic approximation, as described above, is given by the two dashed lines. The phase characteristic as a function of the frequency is given in figure 0.11c; obviously, this is presented on a lin-log

Figure 0.11: The Bode plots of a first-order and 3 second-order transfers (second-order curves for Q = 0.4, Q = 1 and Q = 2): a) modulus of the transfer on a lin-log plot: wrong and too little detail for low |H(jω)|; b) the correct modulus plot, on a log-log scale; c) the correct phase plot, on a lin-log scale.

scale. For comparison, figure 0.11a gives the modulus as a function of frequency on a lin-log scale, which clearly shows that for small moduli many details are lost. Figure 0.11 also gives the modulus and phase characteristic (which together form the Bode diagram) for second-order transfers. From the standard form of a second-order low-pass transfer, H(jω) = H(0)·1/(1 + j·(ω/ω0)·(1/Q) + j²·ω²/ω0²), it immediately follows:

• for ω << ω0 the transfer is almost H(0), with a phase shift of 0°.

• for ω >> ω0 the transfer is almost H(0)·ω0²·ω⁻², with a phase shift of −180°.

• for ω = ω0 the transfer is H(0)·Q, with a phase shift of −90°.

The modulus characteristic is easily drawn asymptotically on a log-log plot; some extra attention has to be paid to the modulus at ω = ω0. Other transfer functions can easily be constructed using the previously stated mathematical rules.

¹⁰Obviously, there are numerous other methods; some of these will be covered later on in this book.

0.5.14 Calculations & mathematics

As every sane human knows, calculations (or mathematics, for more complicated calculations) are a necessity for describing something in an exact manner. Without calculations, there would only be vague statements like “if I change something here, then something changes over there” or “if I press here, it hurts there”. Those statements are completely useless! As in any sensible scientific field, in electronics we like to get sufficiently exact relations that are described in an exact language: mathematical terms. In the past (no guarantee for the future!), it appeared that many students made errors in basic calculations, in mathematical manipulations and in basic calculation rules. To refresh some basic math knowledge, this section reviews some of the most basic math rules.

The basics

The basis of almost all math is the equation: a “=” with something on one side, and something else on the other side. What those somethings are is not of importance, but I know for a fact that the two somethings are equal to each other in some way. These days, in elementary school, students do math with apples, pears and pizzas:

½ pizza + ½ pizza = 1 pizza    (0.1)

Nonsense! Even if you would assume all pizzas to be of exactly the same size, shape and appearance (ingredients and their location), it would still depend on how you slice the pizza in half. It is possible to slice a pizza in half in, more or less, ∞ different directions, and if I cut the half pizzas in (0.1) in two different directions from full pizzas, then there is no way the two of them will form one complete pizza again, although it is suggested by (0.1)¹¹. In electronics, our job is much easier: we work with (integer) numbers of electrons, (a real number of) electrons per second or (real) energy per electron: with charge, current and voltage. We might possibly add flux if we are talking about inductors, but then the physics gets a bit more complicated since we would have to take relativity and Einstein into account. In general, we are dealing with quantities that can easily be added, subtracted, divided and multiplied. The basis for doing math with these quantities is simply the equation:

something = something    (0.2)

often written in a somewhat different form:

somethingform1 = somethingform2 (0.3)

It clearly states that the part left of the “=”-symbol is equal to the part on the right. More specifically: its magnitude is equal, not its form. Often, we would like to rewrite the equation to have something simple on the left (we “read” from left to right) which is understandable (monthly pay, speed, impedance, ...) and a form on the right with all other variables. This is what is called an equation or relation: if you change something on the right-hand side, something also changes on the left-hand side, and vice versa. Such mathematical relations give the relation between different parameters and are very valuable in analyses and syntheses¹².

¹¹Yes, if you take the two halves from the same pizza it would still be incorrect, since you would have two half pizzas with a cut in them. If you think that is the same, think about two halves of a bicycle tire, two half legs or two half glasses. It is not a smooth ride, it does not walk very comfortably and you can’t drink beer from it.

Basic rules

The most basic rules for relations are:

something = something
something · somethingelse = something · somethingelse
something + somethingelse = something + somethingelse
something = something · 1

These rules do not appear to be very difficult, but in fact they are. Specifically the last rule appears to be very difficult: what is this factor 1? A “1” can be written in numerous different ways: an infinite number of ways, in fact. From the first two rules it surely follows:

1 = something1 / something1

something = something · (something1 / something1)

and choosing a convenient factor something1 takes some skill. It requires you to know what you want to know. But you should have already formulated that in §0.6, so it should not be a problem.

Basic math rules

In addition to the basic rules above, it is also assumed that the basic mathematical rules for exponential functions are known and can be applied by you. Also, the derivatives of some basic functions must be known by heart. If you remember how e^something and harmonic signals (sine and cosine) are related, then you have enough knowledge to start off in this book. If you have some skill in manipulating equations, can work in a structured way, have some perseverance and some confidence in yourself, then you should be just fine!

¹²Many people wrongly call “relations” “formulas”; please don’t! A formula is a recipe where you put some numbers in, and you get another number back. There is a reason why they are used in fairytales and other falsehoods: it’s because they like to keep things vague and unclear. A relation gives a (causal) connection between parameters and hence gives more information and can be applied to a much wider scope than a formula.

0.5.15 Simplifying relations

In this book relations will be derived frequently: mostly for impedances and transfer functions. Relations will be derived because they help you to analyze, understand, optimize and synthesise things that are impossible with numerical methods. While deriving these relations, it comes in handy if you have some skill in simplifying equations. Simplifying equations comes down to just a small number of basic tricks:

• multiply by 1

• the equation “something = something”

The challenge is in choosing the correct 1. The transfer of a voltage divider consisting of a capacitor and a resistor could for instance be:

H(jω) = (1/jωC) / (1/jωC + R)

which is a pretty ugly expression that can become more understandable with a multiplication by 1. If you choose the correct 1, that is...

H(jω) = (1/jωC) / (1/jωC + R) · (jωC / jωC) = 1 / (1 + jωRC)    (well chosen 1)

H(jω) = (1/jωC) / (1/jωC + R) · ((1 − e^a) / (1 − e^a)) = ((1 − e^a)/jωC) / ((1 − e^a)/jωC + R·(1 − e^a))    (not so well chosen 1)

Often, in larger circuits and systems, a signal might be a function of itself. Simplifying those relations comes down to choosing a useful “something” in the equation “something = something”. For instance, the relation

y = a·x + b·y

looks nothing like a closed expression if you want to know y. The solution is obviously “separation of variables”, a trick which comes down to adding “something = something” to the relation¹³. A well chosen “something = something” gives:

y − b·y = a·x + b·y − b·y    (something = −b·y)
y·(1 − b) = a·x

Simplifying even further can easily be done by multiplying with a well chosen “something = something”, like:

y·(1 − b)·(1/(1 − b)) = a·x·(1/(1 − b))    (something = 1/(1 − b))
y = a·x / (1 − b)

Hence, in order to simplify a relation, it is of great importance to know the multiplication table of 1 by heart and to be able to use the equation 1 = 1. This seems easy, but it usually proves to be very difficult.

¹³If you add the relation something = something to something, you are obviously actually adding nothing.
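The separation-of-variables trick can be double-checked with a computer algebra system; a short sketch using sympy:

```python
import sympy as sp

# Solve y = a*x + b*y for y; sympy performs the same separation of
# variables as done by hand in the text.
a, b, x, y = sp.symbols('a b x y')
sol = sp.solve(sp.Eq(y, a * x + b * y), y)[0]
print(sol)   # equivalent to a*x/(1 - b)
```

The symbolic result agrees with the hand derivation, y = a·x/(1 − b), possibly written in a different but equal form.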

0.6 Solving exercises

Most problems can be tackled in the same general way. The method below may seem so obvious that steps get skipped, which in turn leads to incorrect results and more work for you.

1. Understand the question, then try to specify it. For example, if you are asked for an output impedance, start writing something like:

zout =?

or

z_out = ∂v_OUT/∂i_OUT = v_out/i_out = ?

This gives you a clear direction for the elaboration. You can also check afterwards whether you have actually calculated what you wanted to know.

2. Make a drawing/schematic where all relevant items for this specific problem are presented. Leave out everything that is not important. You might need multi- ple drawings / schematics to obtain a final version. Putting something together quickly usually yields incorrect results or causes unnecessarily complex calcu- lations.

3. Work in a structured manner towards the answer. This can be done in several ways, some of which will be presented in this book.

4. Verify your answer:

• Check whether it is actually the answer to the question. • Check whether the dimensions (units) agree. If the dimensions are correct,

then it might be the correct answer. Example:

z_out = (Ro·id·Cin) / (1 + jω/ω0)  must be wrong:  Ω ≠ ((V/A)·A·(A·s/V)) / (1 + 1)

z_out = Ro·(1 + id/(vin·jω·Cin)) / (1 + jω/ω0)  can be correct:  Ω = ((V/A)·(1 + (A/V)·1/((1/s)·(A·s/V)))) / (1 + 1)

• Check whether the equation passes the ‘test of extremes’ (fill in an extreme value which simplifies the problem, and reason whether or not the result could be correct). Often 0 and ∞ are useful extremes. For the previous answer:

z_out(ω = 0) → ∞  and  z_out(ω → ∞) = 0

In addition, there are a number of remarks which are useful in general. The exercises of any course can always be solved; you don’t have to worry whether or not you have enough parameters. Usually, too many parameters are given within one exercise, just to confuse students (e.g. for the teachers’ fun). In more complicated assignments, it is not always evident that you have enough data to actually calculate something. Before you start calculating, it might be useful to validate whether or not you are actually capable of calculating something in the first place. A possible method for this is to use the fact that you need n independent equations to solve n variables. If you have fewer equations: be smart. If you have more equations or conditions: compromise! Furthermore, it is always useful to work with variables, instead of numbers. The reason is that you make fewer mistakes, the answer can be verified (see below) and if you make a mistake it does not compromise the entire assignment (read: your grade). When you work with variables, also work with the divide and conquer strategy. Divide your problem into subproblems and put them all together in the end. For example, writing a voltage divider consisting of an RL and an RC section as three separate relations,

v_out = v_in · Z_RL/(Z_RC + Z_RL)  with  Z_RL = L//R = jωRL/(R + jωL)  and  Z_RC = R/(1 + jωRC),

and then substituting everything requires much less calculation than calculating everything at once¹⁴. If you follow points 1 to 3, then you are able to solve just about any problem, electronic or otherwise. Point 4 is to verify your answer. By now, you might be wondering why there is nothing stated here about verification using the answer manual.

¹⁴This can also be shown scientifically for systematic methods like the node analysis. If you use Gaussian elimination, it takes in the order of n³ (hence O(n³)) calculations to solve the system of equations. Separating the original problem into two smaller problems takes only O(2·(n/2)³ + 2³) manipulations. Subdividing the problem into smaller subproblems that consist of about 2 calculations or components is optimal. This subdividing is inherently woven into the brute force approach. If you wish to do as little work as possible, always divide the problem into smaller problems, which you solve independently. From there, you can easily construct the original problem again, but now including the answer.
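The divide-and-conquer voltage divider can be coded the same way: three small relations, combined only at the end. The component values in the checks are illustrative assumptions, chosen in the spirit of the test of extremes:

```python
# Voltage divider of an RL section over an RC section, written as three
# separate relations and only combined at the end (divide and conquer).
def Z_RL(w, R, L):
    """Impedance of L parallel to R: jwRL / (R + jwL)."""
    return (1j * w * R * L) / (R + 1j * w * L)

def Z_RC(w, R, C):
    """Impedance of R parallel to C: R / (1 + jwRC)."""
    return R / (1 + 1j * w * R * C)

def H(w, R1, L, R2, C):
    """Transfer v_out/v_in = Z_RL / (Z_RC + Z_RL)."""
    zrl = Z_RL(w, R1, L)
    return zrl / (Z_RC(w, R2, C) + zrl)
```

Test of extremes: at ω → 0 the inductor shorts the output, so H → 0; at ω → ∞ the capacitor shorts Z_RC while Z_RL → R1, so H → 1.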

0.6.1 Verification using the answer manual

For most students (obviously not for you), an answer manual is useless, since it is typically used the wrong way: the elaboration is read along with the exercise, which makes many students conclude that they would have been able to solve it themselves. However, actually being able to solve the problem and understanding the solution are two entirely different things¹⁵. The correct way to handle an answer manual is:

• follow the four easy steps listed on the previous two pages

• After step 1:

– “I do not know the answer”: great! If you had known the answer immediately, it would have been a terrible question, since it would be too easy or focused on simply remembering stuff.
– after 5 minutes of puzzling: first mutter how difficult this question is, then be happy that you are given the opportunity to learn something.

• After step 2-3 (and possibly 4):

– solve the problem all by yourself (after puzzling, reading, puzzling, thinking, trying, >30 minutes). If you have not succeeded, ask someone (quite possibly a tutor) an intelligent question: what exactly are you stuck on?

• When done:

– if you are confident that your answer is correct, then verification using the answer book is not necessary, since you are confident. The only use would be to see that your answer could have been written in another form. Since any relation can be written in an infinite number of different forms, this is quite useless.
– if you are not confident that your answer is correct, then verification is senseless, because you would have had to continue until you had the confidence.

In essence, every answer manual is useless; Herman Finkers already stated “stories for in the fireplace”¹⁶ ¹⁷. In the end, the only correct way of using an answer manual is not using it at all.

¹⁵This is the reason you never get the answers to your exam during the exam, with a sheet of questions with something like: Assignment 1. Tick the correct answer: ☐ I could have made this assignment myself ☐ I could not have made this assignment myself
¹⁶Freely translated.
¹⁷H. Finkers, “Verhalen voor in het haardvuur”, ISBN: 9789060053973, Rap: 2003

0.7 And finally...

Some useless knowledge always comes in handy. If you have ever wondered about e:

e = lim_{x→∞} (1 + 1/x)^x

More nonsense: for non-linear effects, you usually have some x^b and you have to do something with it. For harmonic distortion calculations you would then need something like sin^n(x), which is not that easy to use. Luckily, Euler has told us many things, among which:

cos(x) = ½·(e^{jx} + e^{−jx})

Using the binomium:

(x + y)^n = Σ_{k=0}^{n} (n over k) · x^{n−k}·y^k

This binomium is nothing more and nothing less than counting all possibilities to obtain a specific power. For instance, (x + y)⁴ is the same as (equals) (x + y)·(x + y)·(x + y)·(x + y) and there is only one way to get x⁴: multiply all x’s within parentheses with each other. To get to x³y, there are 4 ways to change one x into a y: plain combinatorics. Oh right, cos^n(x) is:

cos^n(x) = (½·(e^{jx} + e^{−jx}))^n
         = (½)^n · (e^{jx} + e^{−jx})^n
         = (½)^n · Σ_{k=0}^{n} (n over k) · e^{jx·(n−k)}·e^{−jkx}

Just take Euler by the neck and tadaa: you can rewrite any cos^n(x) into a series of higher harmonic components. You just might need it sometimes. More useless information, now that we are talking about the binomium: you can easily use it to see that the derivative (with respect to x) of a term a·x^p equals a·p·x^{p−1}:

∂(a·x^p)/∂x = lim_{δ→0} (a·(x + δ)^p − a·x^p)/δ
            = lim_{δ→0} (a·Σ_{k=0}^{p} (p over k)·x^{p−k}·δ^k − a·x^p)/δ
            = lim_{δ→0} (a·Σ_{k=1}^{p} (p over k)·x^{p−k}·δ^k)/δ = (a·(p over 1)·x^{p−1}·δ)/δ
            = a·p·x^{p−1}    (and you only have to remember this)
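For n = 3 the Euler-plus-binomium expansion of cos^n(x) yields the well-known identity cos³(x) = (3·cos(x) + cos(3x))/4, which can be checked numerically:

```python
import math

# Numerical check of the n = 3 case of the cos^n expansion:
# cos^3(x) = (3*cos(x) + cos(3*x)) / 4, obtained via Euler and the binomium.
def lhs(x):
    return math.cos(x) ** 3

def rhs(x):
    return (3 * math.cos(x) + math.cos(3 * x)) / 4

for x in (0.1, 0.7, 2.3):
    print(x, lhs(x), rhs(x))
```

The third-harmonic term cos(3x) is exactly the kind of component that shows up in harmonic distortion calculations.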

Enough with this useless chatter, let us start with the actual topics of this book. Have fun!

Chapter 1

Models

Electronics and electrical phenomena in general can be described by the electric and magnetic forces that act on the charges. To calculate the electrical behavior exactly, we would have to consider the Maxwell equations for every electron¹. This results in an unimaginable amount of work. In order to simplify these calculations, we usually make models of the problem. Models are simplified or idealized representations of reality, which, in electronics, have the purpose of clarifying things or reducing the amount of work for the calculations. Since a model is a simplification of reality, it is also less accurate. One of the complications in making a model is finding the correct balance between the complexity of the model and its accuracy. In general:

The simplest model with a sufficient accuracy must be used.

In science, models are usually represented by a collection of equations; sometimes one single simple relation is sufficient. Calculations on circuits which are built up from these models can be performed using simplified versions of the Maxwell equations: the element equations and the Kirchhoff voltage law (KVL) and the Kirchhoff current laws (KCL). In this chapter, we cover the models of some electronic components and their us- age. Here, the emphasis is on the balance between accuracy and simplicity. Another (related) purpose of this chapter is how to deal with boundary conditions of models: the validity range of the model used by you. Models are used extensively in the fol- lowing chapters for calculations and simplification.

1.1 Components

In this chapter, we first develop models for a few electronic components. Such models are usually formulated as relations between the current through it and the potentials across its terminals. These relations can be described in many different ways:

1And if we really want to be precise, we would have to take into account the speed of electromagnetic waves, which follows from the relativity theory.

35 36 CHAPTER 1. MODELS

• in the DC domain, where all reactive elements (those with a frequency-dependent behaviour) are modelled as short circuits (inductors, with Z_L = jωL at ω = 0) or as open circuits (capacitors, with Z_C = 1/(jωC) at ω = 0).

• in the frequency domain, where we use the impedances of components. This implies that a harmonic (sinusoidal) signal is used and that the circuit is (sufficiently) linear.

• in the time domain. In this domain, we use the various element equations (in differential form or integral form). This approach can be used in any circuit, linear / non-linear, stable / non-stable, but usually results in a lot of work.

A special category of electronic components is formed by those where the current flowing through a certain terminal pair can be influenced by some parameter that is not the voltage across that terminal pair. Examples of this include components that can have (power) gain: in MOS transistors and bipolar transistors the output current flowing through an output port is controlled by the voltage drop across an input port.

Non-influenceable components

Examples of non-influenceable components are the ideal resistor, the ideal capacitor and the ideal inductor. The adjective “ideal” already suggests that we are talking about a model: a simplification of reality. For the ideal resistor, the following is always true:

i_R = G·v  or  v = R·i  with  R = 1/G

and for the ideal capacitor:

v_C(t) = V_C(t = t0) + (1/C)·∫_{t0}^{t0+t} i_C(t) dt

In this last relation the potential vC is built up by the current iC (t) during some time t. Usually, the time-dependency is not shown in integral form, but in the (shorter) differential form2. The element equation for a capacitor then is:

i_C(t) = ∂Q/∂t = C·∂v_C/∂t

²An important reason for working in differential form, besides the fact that it is more compact, is that the human mind is better at observing change (which is based on a comparison with the previous event) than “the whole” or “absolute” sizes (for which we usually do not have enough reference material). Reptiles have this even more so: they can only see changes, which is the reason why they can catch bugs flying by, but not bugs which are sitting still. The same holds when applying for a job: active people are easily picked out of a crowd, while passive applicants are easily overlooked. Please note: I am not actively comparing recruiters to reptiles.

Influenceable components

An example of an influenceable time-independent component is the light-sensitive resistor; its current-voltage relation is presented in figure 1.1. If we use the variable $E$ for the luminosity, then at any instant, the following is true:

$i = \frac{v}{R(E)}$

A variable quantity or property — in this case the luminosity $E$ — is called a parameter. In general, a parameter will be a physical quantity (air pressure, volume, temperature). We normally use influenceable components that can convert an external (non-electric) signal to the electrical domain as sensors for an electronic system. In an electronic circuit, it is customary to express the influenced quantities as currents or voltages.


Figure 1.1: Influence of luminosity on a resistor.

Electronically influenceable components: controlled components

For just about every useful function in electronics we need some form of amplification, or gain. This is true for both analog and digital circuits. This amplification can be voltage gain or current gain, but in general we need power gain: the controlling (input) power is then smaller than the controlled (output) power. According to the law of conservation of energy, the output power cannot be increased without the presence of another source; the output power is controlled by the influenceable component, but delivered by that other source3 (usually a voltage source). The transistor (the name originates from transfer-resistor) is an example of an electronic component where both the influenceable and the influencing quantity are in the electric domain. Transistors are power-amplifying electronic components, in which the output current (for a bipolar transistor the current from collector to emitter, $i_C$) is determined by the potential between collector and emitter ($v_{CE}$) and a controlling voltage $v_{BE}$. The characteristics of figure 1.2 describe $i_C = f(v_{CE})$ with $v_{BE}$ as a parameter.


Figure 1.2: Transistor characteristics with the voltage between base and emitter ($v_{BE}$) as parameter.

Domains

Electronic systems are used to perform useful tasks. Without exactly specifying which task is useful, we can generally state that a useful electronic system uses input and output signals which represent something in the physical world. These physical signals can have many different forms; these forms (like heat, light, sound, movement, electrical signals) are usually called domains. This is roughly how the chemical and physical domains can be separated, but in more detail, we can identify the thermal, optic, acoustic, (Newtonian) mechanic and electric domains. The energy in a component is usually spread over multiple domains. Current through a wire under the influence of an electric field (electrical domain) always comes with collisions (mechanical) between particles (friction), which lead to changes in temperature and expansion of the material. Components with an unequivocal and reproducible relation between the results of this change in energy distribution and the externally observable electric behaviour are generally denoted as electric sensors. Some examples of sensors, where the change in

3Since an element with power gain transforms the energy of a DC source ($\omega = 0$) into an AC output power (at $\omega \neq 0$), every element with power gain must be non-linear.

energy distribution is externally observable by (changing) currents and/or potentials, are:

• microphone : acoustic to electric

• piezo-element : mechanic to electric

• thermocouple : thermal to electric

• accumulator : chemical to electric

After the electronic manipulation of the signals, the resulting electric quantity is usually transformed into an action in another physical domain. Because of this action, these components are usually called actuators. For example:

• laserdiode : electric to optic

• disk write head : electric to mechanic or thermal

• piezo-element : electric to mechanic

• loudspeaker : electric to acoustic

• LCD : electric to mechanic / optic

• accumulator : electric to chemical

• antenna : electric to electro-magnetic

The complete group of sensors and actuators is usually called transducers. The boundary of an electronic system usually runs through the transducers. Electronics is, as said earlier, mainly concerned with the part inside the dotted line in figure 1.3.


Figure 1.3: Electrical domain.

1.2 Analysing and modelling circuits

The behaviour of electronic components, and hence of circuits, is determined mainly by Maxwell's laws. Strictly using these laws in actual circuits leads to very involved calculations and non-interpretable results. The solution is to use models that take only the issues relevant for the problem at hand into account, and neglect everything else. For example, real resistors can usually be modeled with an ideal resistor at sufficiently low frequencies. In this context, “sufficiently low frequencies” is related to the size of the component and the signal frequencies, due to e.g. the propagation speed of an EM wave. More accurate models of physical resistors include things like a parallel capacitance and a series inductance, see figure 1.4. These non-ideal effects must only be taken into account if they are significant; the use of an overly general model leads to a lot of useless work and very little understanding.


Figure 1.4: An example of a model.

In electronics, the most important components are those which are non-linear, since they are needed for power amplification. In their normal operation, amplifying components are exponential (bipolar transistors) or quadratic (MOS transistors). In general, the analysis of non-linear circuits can be performed in three different ways:

1. if the non-linear electric characteristics are known (measured, modelled, ...), the solution for simple circuits using these non-linear components can be found using graphs or combinations of graphs (a truly graphical method that existed well before graphical calculators were available). This used to be the method of choice back in the old days.

2. if a mathematical expression (e.g. a polynomial function) is known that sufficiently accurately models the non-linear behaviour, solutions can be obtained using (analytical or numerical) calculation methods. Numerical solutions can easily be performed by circuit-simulation software, while analytical solutions may be hard if non-linear components are involved.

3. if a non-linear component is operated in a small region around a bias point, the non-linear characteristic may be sufficiently accurately modelled using a low-order Taylor polynomial (preferably a first-order one). This way, linear techniques from network theory can be applied.

This last method is called linearization; its main advantage is that calculations can easily be done by hand, yielding readable and usable relations. Its main disadvantage is that it really is a model, with all its limitations in accuracy and its associated validity boundaries. In this chapter, we will limit ourselves to a number of applications of the non-linear “diode”.

1.3 Ideal diode model

The characteristic behaviour of an idealized diode is shown in figure 1.5; its $i$-$v$-curve satisfies the non-linear relation below. The semiconductor physics behind this relation is beyond the scope of this chapter: for now only the non-linear characteristic is important, as it is used for the introduction of modelling non-linear components.

$i_D = I_0 \left( e^{\frac{q \cdot v_D}{kT}} - 1 \right)$


Figure 1.5: The diode and its i-v-relation

Calculations using this (idealized) exponential element equation, even in a simple circuit consisting of a diode and a resistor, already prove to be quite hard, requiring Lambert-W functions to get closed analytical expressions. It yields hard-to-read expressions that don't give any insight; the only value may be the development of mathematical skills, which is not the goal of this book. To be able to get sensible relations, a model must be made: an abstraction of the real behavior that is sufficiently accurate but also sufficiently simple to get readable and interpretable relations:

calculation too hard → make and use a suitable model

A very much simplified model for the actual i-v-characteristic of a diode — see figure 1.5 — is a crude approximation of the curve by two linear parts: the forward part and the reverse part. The simplest approximation is then:

• forward domain: $i_D > 0$, $v_D = 0$, $R_D = 0\,\Omega$

• reverse domain: $v_D < 0$, $i_D = 0$, $R_D = \infty\,\Omega$

This model will be denoted the ideal diode model; figure 1.6 shows its symbol together with its $v_D$-$i_D$-characteristic. In a circuit, the ideal diode can be represented by either an open (for $v_D < 0$ V) or a short (for $i_D > 0$ A). Note that using this model (approximation of real behavior) the diode is a linear device, although with a different representation in reverse and in forward. In analyzing a circuit with this ideal diode model, either of the two linear equivalents must be used. Selecting the correct model may be done using insight, luck, or in a systematic way using “reductio ad absurdum”.


Figure 1.6: The ideal diode and its i-v-relation.

Reductio ad absurdum: Suppose all the ideal diodes are in reverse; determine the potential across every ideal diode. The diodes across which a positive potential is present are in forward, and hence have a voltage of 0 V. Determine again the potentials across all diodes and repeat this procedure until there are no more changes.

The ideal diode and analysis according to “Reductio ad absurdum” is used in the fol- lowing subchapters on a number of different circuits:

• gate circuits,

• rectifiers,

• clippers and

• clampers.

Before we dive into these applications, we add some more accuracy to the ideal diode model, to be used only for calculations that do need more accuracy. As stated earlier, making a more accurate model yields more accurate results at the cost of more complex calculations. In manual analysis, the simplest model which gives sufficiently accurate results should be used; for numerical analysis this is also true, although computational power nowadays is sufficient to simulate most circuits in (almost) no time.

A DC-voltage shift If we compare the real diode characteristic with the ideal characteristic, we notice that the curve can be approximated better using a switch that switches to forward at a non-zero voltage. As stated earlier, this expansion of the simplest model need only be used if there is a need for it: always use the model which is as simple as possible while giving enough accuracy.


Figure 1.7: The ideal diode with DC-voltage shift and its i-v-relation

The new, more realistic diode model is represented in figure 1.7, for which:

• forward: $i_D > 0$, $v_D = V_D$, $R_D = 0\,\Omega$

• reverse: $v_D < V_D$, $i_D = 0$, $R_D = \infty\,\Omega$

A DC-voltage shift and series resistance The previous two models for a diode may not always be sufficiently accurate. Noting that the previous two models are zeroth-order models (using the zeroth-order term of a Taylor expansion), a straightforward accuracy improvement is adding a first-order term. This first-order term corresponds to the slope of the curve, which can be modelled by an additional series resistance, see figure 1.8. In this model:


Figure 1.8: The ideal diode with DC-voltage shift and series resistance and its i-v-relation

• forward: $i_D > 0$, $v_D = V_D + i_D \cdot R_D$, $R_D = \Delta v_D / \Delta i_D$

• reverse: $v_D < V_D$, $i_D = 0$


Figure 1.9: Calculation of the serial resistor at a specific bias point.

Zooming in on the diode curve in figure 1.8, we see that a good approximation in the region around the point $\{V_D, I_D\}$ is the tangent at that point on the curve:

$i_D = I_D + g \cdot (v_D - V_D)$

with

$g = r^{-1} = \left. \frac{\partial i_D}{\partial v_D} \right|_{\{V_D, I_D\}}$

Note that these relations are nothing more or less than a first-order Taylor expansion of the non-linear curve in the point $\{V_D, I_D\}$, see figure 1.9.
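The quality of such a tangent-line (first-order Taylor) model is easy to check numerically. The following is a minimal sketch; the saturation current, thermal voltage and bias point are assumed example values, not taken from the text:

```python
import math

I0 = 1e-14      # saturation current, assumed example value
VT = 0.02585    # thermal voltage kT/q at room temperature, in volts

def i_diode(v):
    """idealized exponential diode equation"""
    return I0 * (math.exp(v / VT) - 1)

VD = 0.6                 # chosen bias point (assumed)
ID = i_diode(VD)
g = (ID + I0) / VT       # derivative of the diode equation at {VD, ID}

def i_lin(v):
    """first-order Taylor (tangent) model: i_D = I_D + g * (v_D - V_D)"""
    return ID + g * (v - VD)

err_near = abs(i_lin(0.605) - i_diode(0.605)) / i_diode(0.605)
err_far = abs(i_lin(0.7) - i_diode(0.7)) / i_diode(0.7)
print(err_near < 0.05)   # True: accurate close to the bias point
print(err_far > 0.5)     # True: poor far from the bias point
```

This illustrates the validity boundary mentioned earlier: the linearized model is only trustworthy in a small region around the bias point.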

1.4 Diode models and time-independent circuits

Figure 1.10a and 1.10b show two logic-gate circuits and their respective truth tables. Because the elements used are diodes, these gates belong to “diode logic”. The tables are based on positive logic, so a signal (a, b or c) is viewed as a logic “1” at a certain positive voltage, and as a logic “0” at zero voltage.


Figure 1.10: Gate circuits

a b c     a b c
0 0 0     0 0 0
1 0 1     1 0 0
0 1 1     0 1 0
1 1 1     1 1 1
  a)        b)

Table 1.1: Truth tables for the circuits in figure 1.10.

As a starting point for the analysis of the OR-gate in figure 1.10a, we assume a=“1”, b=“0”. Furthermore, the potentials at a and b are assumed to be due to ideal voltage sources.

Analysis

1. For a positive voltage at node a, assume (arbitrarily, using “reductio ad absurdum”) that no current flows through D1; then:

2. The potential at node c would be low (= 0 V). Such a voltage across D1, however, would generate a very large current, in contradiction with the assumption.

3. Conclusion: D1 is in forward and the voltage at node c is “1”. Using symmetry, it is easy to derive that the output voltage is “1” if either one or both of the input signals are “1”.

The analysis of the AND-gate circuit of figure 1.10b uses the same principle. Then we find: c is “1” only if both inputs a and b are “1”. It is a good exercise to see whether you find the same truth tables if you choose a different starting point.
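The “reductio ad absurdum” procedure can be sketched as a small fixed-point iteration. The following is a minimal sketch (the function name and solver loop are my own illustration, not from the book) for the ideal-diode OR-gate of figure 1.10a, where the inputs drive node c through ideal diodes and the resistor pulls c to 0 V:

```python
def or_gate(inputs):
    """Solve the ideal-diode OR-gate by iterating the diode states."""
    forward = [False] * len(inputs)      # start: assume all diodes in reverse
    while True:
        # node voltage: a conducting diode shorts its input to node c;
        # with no diode conducting, the resistor pulls the node to 0 V
        vc = max((v for v, f in zip(inputs, forward) if f), default=0.0)
        # re-check every diode against the ideal-diode conditions
        new_forward = [v - vc > 0 or (f and v - vc >= 0)
                       for v, f in zip(inputs, forward)]
        if new_forward == forward:       # no more changes: done
            return vc
        forward = new_forward

print(or_gate([5.0, 0.0]))   # a="1", b="0": output 5.0 V, i.e. "1"
print(or_gate([0.0, 0.0]))   # both "0":    output 0.0 V, i.e. "0"
```

The same loop with the node voltage taken as the minimum of the conducting inputs would model the AND-gate of figure 1.10b.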

“Diode logic” circuits can be expanded with more diodes to get more inputs, or they can be combined into more complex logic circuits. The usability of gate circuits which exclusively use diodes and resistors is limited for a number of reasons. The first problem is that the signal level decreases for every (non-ideal) diode. This is illustrated by the example in figure 1.11, where three OR-gates are in series. In this example, the diode model of figure 1.7 is used: the ideal diode with a series voltage source. The leftmost OR-gate has 5 V (“1”) at its top input node; the lower inputs of


Figure 1.11: A cascade of three OR-gates.

all gates are at 0 volt (“0”). We assume that a real physical diode has a voltage drop of about 0.6 V if the diode is carrying a current. This causes the voltage at the output of the leftmost gate to be about 0.6 V lower than 5 V, hence 4.4 V. This 4.4 V is the input voltage of the top input node of the middle gate. This results in another 0.6 V voltage drop across the diode, causing the output voltage to be even lower. The same holds for the rightmost gate, resulting in a total output voltage of just 3.2 V! This output voltage is still assumed to be on a “1”-level, although it is 1.8 V lower than the original signal on the leftmost logic gate.

A second problem is that the controlling source, which delivers the signal for the circuit, is loaded quite heavily. Furthermore, we directly see that this load can increase with the number of gates in the circuit.

A third problem is that diode logic can only be used to build OR- and AND-gates, while implementing an arbitrary logic function also requires an “inverse” operation.

The above problems can be solved by adding a component with negative voltage gain. A direct expansion of diode logic with a simple bipolar inverting amplifier (see later in this book) results in the well-known TTL: transistor-transistor logic. In today's integrated circuits, CMOS logic is used, which is a clever CMOS alternative to TTL.
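The level degradation in a cascade of diode OR-gates can be sketched in a few lines (the 0.6 V forward drop is the assumption made above):

```python
# Sketch of a three-gate cascade: with the diode model of figure 1.7
# (assumed forward drop V_D = 0.6 V), each OR-gate lowers the "1"-level.
V_D = 0.6
v = 5.0
levels = []
for gate in range(3):
    v = round(v - V_D, 1)    # one conducting diode per gate
    levels.append(v)
print(levels)                # [4.4, 3.8, 3.2]
```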

1.5 Diode models and time-dependent circuits

In the following paragraphs, a number of circuits containing combinations of diodes and capacitors are discussed briefly. Its purpose is to show the usage of the ideal-diode model in time-dependent situations. The following will be discussed:

• Rectifying with ripple smoothing (AC-DC-conversion)

• Clipping, detection

• Clamping

• Voltage multiplication

Rectifier circuits A rectifier circuit is used for the conversion of an AC-voltage (or current) to a DC- voltage (or current). The requirements for such a rectifier are:

• positive parts of the input AC-voltage must be directed to the positive output, or

• negative parts of the input AC-voltage must be directed to the negative output, or

• the above combined.

A circuit which satisfies the first item is given in figure 1.12. We call this a half-wave rectifier, since only one half of the input AC voltage is used. The circuit is driven by a transformer.


Figure 1.12: Half-wave rectifier with transformer, diode and resistor.

Analysis According to Kirchhoff's voltage law, the mesh equation at the secondary side of the transformer is:

$v_{SEC} = V_{sec} \cdot \sin(\omega t) = v_D + i \cdot R \qquad (1.1)$

For every diode, we have:

$v_D = 0$ for $i_D \geq 0$

$i_D = 0$ for $v_D < 0$

Now, we can distinguish two different cases. Noting that every ideal diode has 2 possible states, a circuit containing N diodes can have $2^N$ different states. For figure 1.12, we have:

$v_{SEC} > 0 \;\rightarrow\; v_D + i_D \cdot R > 0$

$\rightarrow\; v_D > -i_D \cdot R$ with $v_D \leq 0$ for the ideal diode

$\rightarrow\; i_D > 0$ and $v_D = 0$

$v_{SEC} < 0 \;\rightarrow\; v_D + i_D \cdot R < 0$

$\rightarrow\; v_D < -i_D \cdot R$ with $i_D \geq 0$ for the ideal diode

$\rightarrow\; v_D < 0$ and $i_D = 0$

Conclusion: during the positive part of the source voltage, the diode is in forward; then $v_D = 0$, hence the source voltage equals the output voltage and $i = \frac{V_{sec}}{R} \cdot \sin(\omega t)$. During the negative half of the input voltage, the diode is in reverse, and hence $i_D = i = 0$; see figure 1.12b. The average value of the voltage across the resistor R, and the average current through the resistor, are:

$\overline{v_{OUT}} = \frac{1}{T} \int_0^{T/2} V_{sec} \cdot \sin(\omega t)\, dt = \frac{V_{sec}}{\pi} \approx 0.318\, V_{sec} \qquad (1.2)$

$\overline{i} = \frac{1}{\pi}\, \hat{i} = \frac{1}{\pi}\, \frac{V_{sec}}{R} \approx 0.318\, \frac{V_{sec}}{R}$

The half-wave rectifier gives a non-constant output voltage. However, for many applications — among others, usage as power supply for electronic equipment — we need a (constant) DC output voltage. The circuit of figure 1.12 does not satisfy this condition: it gives a very significant ripple.
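The factor $1/\pi \approx 0.318$ in equation (1.2) can be verified with a quick numerical average of a half-wave rectified sine over one full period (a sketch, using a plain Riemann sum):

```python
# Numerical check of eq. (1.2): the average of a half-wave rectified sine
# over a full period equals 1/pi (about 0.318) times the peak value.
import math

N = 200000
avg = sum(max(math.sin(2 * math.pi * n / N), 0.0) for n in range(N)) / N
print(abs(avg - 1 / math.pi) < 1e-4)   # True
```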

Rectifier with ripple smoothing The main problem with the circuit of figure 1.12 is that the output ripple is large. The cause lies in the diode, which is only in forward for a small amount of time, while for a (more or less) constant output voltage, we usually need a constant current. The solution here is adding a charge reservoir, from which charge can be drained when the diode is not in forward. Such an addition is called a smoothing filter. The most basic smoothing filter consists of a capacitor that is large enough, see figure 1.13a. Figure 1.13b shows the output voltage vL as a function of time for a given value of C. The capacitor acts as a charge reservoir. If the diode is in forward, capacitor C is charged; if the diode is in reverse, C is discharged by the load current. If we assume an ideal diode, and assume a (periodic) stationary state, then the diode goes into forward when the input voltage vSEC(t) equals the capacitor voltage vL(t), which is at time t1 in figure 1.13b. Diode D is in forward as soon as, and during the time that, the source voltage v(t) is higher than or equal to vL(t). During this time, C is charged. After the voltage v(t) has passed its maximum, D will go into reverse. This happens as soon as the current through


Figure 1.13: Half-wave rectifier with smoothing capacitor

D would change direction, or, in other words, when the source voltage decreases faster than C can discharge through the load resistor R. At the time instance where the diode is only marginally in forward:

$i_D(t) = 0$

$i_C(t) = -i_L(t)$

$i_C(t) = C \cdot \frac{\partial v_L(t)}{\partial t} = -\frac{v_L}{R} \qquad (1.3)$

This happens in figure 1.13b at t = t2. After this, the diode is in reverse until t = t3. If we use the ideal-diode model, the circuit in figure 1.13 reduces to the two equivalent circuits in figure 1.14.

a) D is a short when $i_D > 0$; b) D is open when $v_D < 0$.

Figure 1.14: Calculation circuits for the circuit in figure 1.13.

Analysis If the diode is in forward, the original circuit collapses to the equivalent shown in figure 1.14a. In that equivalent circuit, the output voltage is equal to the input voltage. If the diode is in reverse, the circuit collapses to the equivalent in figure 1.14b, in which the output voltage decreases exponentially from some starting value towards zero. In equation form:

$v_L(t) = v(t) = V_l \cdot \sin(\omega t)$ for $t_1 < t < t_2$

$v_L(t) = v_L(t_2) \cdot e^{-(t - t_2)/(RC)}$ for $t_2 < t < t_3 \qquad (1.4)$

If the capacitor size C is chosen to be large (or actually, if $R \cdot C$ is large compared to the period T of v(t)), then the voltage vL(t) between t2 and t3 will only decrease slightly. As a result, the DC component in vL is large and the ripple component is small. We can approximate the ripple by assuming that $t_3 - t_2 \approx T$ and $i_L \approx V_l/R$. During the interval t2...t3, charge is extracted from C, causing a decreasing voltage over C. This voltage decrease is by definition equal to the top-top value of the ripple voltage vripple. It then follows that:

$\Delta Q = \int_{t_2}^{t_3} i_L(t) \cdot dt \approx \frac{V_l}{R} \cdot T \qquad (1.5)$

$v_{ripple,top\text{-}top} = \Delta v = \frac{\Delta Q}{C} \approx \frac{V_l \cdot T}{R \cdot C} \qquad (1.6)$

Now, the average value of vL(t) is

$\overline{v_L} \approx V_l \left( 1 - \frac{T}{2RC} \right) \qquad (1.7)$

This value can be significantly higher than the $V_l/\pi$ obtained for the circuit in figure 1.12.

Conclusion A large capacitor results in a small ripple voltage; the downside of a large capacitor is the high peak value of the current that the source and the diode have to deliver. Please try to derive an analytical expression for the ripple voltage using the (still idealized) exponential element equation for a diode: you may encounter a number of mathematical problems, while it does not give you that much extra accuracy or insight.
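Equations (1.6) and (1.7) are easy to evaluate for a concrete design. The following is a sketch with assumed example values (a 50 Hz mains-driven rectifier; none of the numbers come from the text):

```python
# Sketch with assumed example values: ripple (eq. 1.6) and average output
# voltage (eq. 1.7) of the half-wave rectifier with smoothing capacitor.
Vl = 10.0     # peak voltage, V (assumed)
R = 1e3       # load resistance, ohm (assumed)
C = 1e-3      # smoothing capacitor, F (assumed)
T = 1 / 50.0  # period of a 50 Hz mains voltage, s

v_ripple = Vl * T / (R * C)           # top-top ripple voltage
v_avg = Vl * (1 - T / (2 * R * C))    # average output voltage

print(round(v_ripple, 3))   # 0.2 V
print(round(v_avg, 3))      # 9.9 V
```

Note how the average output (9.9 V) is indeed far above the $V_l/\pi \approx 3.2$ V of the unsmoothed circuit.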

Full-wave rectifier In the circuits of figure 1.12 and figure 1.13, we only use half of the input voltage. Therefore, we need a rather large capacitor for smoothing out the ripple, forcing the diode to handle large peak currents. A solution is using the complete input voltage instead of just half of it. We then obtain the so-called full-wave rectifier topology. In essence, a full-wave rectifier is the same as twice a half-wave rectifier, resulting in the diode or rectifier bridge (Graetz) circuit shown in the figure below:


Rectifier circuit with the use of a diode bridge (Graetz bridge).

In the bridge, both halves of the sine are used. For the positive half of the source voltage, diodes D1 and D3 are in forward, while D2 and D4 are in reverse. For the negative half of the sine, the complementary situation occurs.

It can be concluded that the bridge rectifier is essentially twice a half-wave rectifier. As derived for the half-wave rectifier, the ripple voltage is proportional to the period of the input signal. Since the period is halved for a full-wave rectifier, the same capacitor size gives (about, we made some simplifications...) half the ripple voltage.

Clippers A clipper limits the amplitude of a signal. A clipper can be either a half-wave or a full-wave version. Clippers are for example used in some FM receivers: if the FM demodulator is sensitive to amplitude variations, usually the signal is heavily amplified before a clipped version of it is fed to the actual FM detector. In this case the FM detector sees a digitized signal without any amplitude variation. A different application is the detection of optical data signals: the analog signals of the photo diode are digitized after amplification, using a clipper circuit. In general, clippers are applied to limit the amplitude of the output signal, making it more or less independent of the size of the input signal, in order to e.g. protect other circuitry from a high voltage. There are multiple ways to make a voltage clipper. However, the essence is that the output signal of a clipper is “identical” to the input signal within a set range, and “limited” in any other range. This means that we have some non-linear behaviour, and we need at least one non-linear element. A frequently used approach is presented in figure 1.15.


Figure 1.15: Essence of the behavior of a voltage clipper.

The circuit of figure 1.15 is a resistive divider with at least one highly non-linear resistor. A possible implementation is given in figure 1.16a. Please verify for yourself that this implementation gives the desired clipper function. The voltage transfer of the clipper in figure 1.16a is given in figure 1.16b. As an example, this circuit will produce the output signal of figure 1.16c when presented with the input signal of figure 1.16d. Note that the input signal is rotated −90° for easy graphical construction of the output signal: mirroring the input signal in the transfer function yields the output signal.


Figure 1.16: a) Clipper implementation b) the transfer c) the output signal d) the input signal.

DC-clamp A DC-clamp is a circuit that adds a DC component to the input voltage of a circuit. The value of the added DC component is usually derived from the AC input voltage. One of its applications is in the detection of analog video signals. Video signals contain a DC component before transmission: the light intensity of a monitor or TV screen is always positive, hence the average intensity is also positive. Before transmitting this signal, the DC component is filtered out. In the receiver, the original video signal must then be reconstructed: the original DC component has to be restored in the video signal4. This DC-level restoration is usually done using a DC-clamp or DC-restorer. If we look at the list of demands for a DC-clamp, we find a number of issues:

• The DC-clamp is non-linear

• A memory element is needed (for storage of the shift)

• The circuits up to now consist only of R's, D's, C's and L's (boundary condition).

Since the DC-clamp is non-linear, we need at least one non-linear element. Until now, the only element we have is the diode5. For the memory element, we can use a capacitor (voltage memory) or an inductance (current memory). Noting that voltages add up by placing sources in series, and currents add up by using parallel sources, we get two possible ways to clamp a signal, see figure 1.17. In these circuits the signals are shifted in such a way, that the output signal is always positive. Please verify that expanding/changing the circuit to get other shifts or to negative output signals is straightforward. Below you can find a short analysis of the behaviour of the voltage clamp in figure 1.17b. For clarity, figure 1.18 gives the resultant for an arbitrary input signal.

4In order to know the size of the DC component, a so-called “ultra-black” signal is sent in the video signal. This is the lowest signal sent and its level corresponds to a light intensity of zero. 5In the next chapters we will introduce other non-linear elements, giving many more possibilities to implement non-linear functions.


Figure 1.17: Basic DC-clamp circuits with D,L,C: a) current clamp b) voltage clamp.

Analysis We use the “ideal diode” model and assume that the capacitor is initially uncharged. For a positive input signal, the diode will be in reverse: the diode current then is zero. As a direct consequence, the voltage across the capacitor is unchanged: vC =0. This immediately results in vOUT = vIN. For a negative input signal, the


Figure 1.18: A possible clamp circuit with diverse signals.

diode is in forward, hence $v_{OUT} = 0$. The capacitor will now be charged to the negative peak value of $v_{IN}$. Directly after $v_{IN}$ has reached its minimum value, the diode will go into reverse again; the output voltage $v_{OUT}$ is now shifted positively compared to the input signal $v_{IN}$. According to Kirchhoff, $v_{OUT} = v_{IN} - v_C$, where $v_C$ equals the previously most negative value of the input signal. The charged capacitor adds a (in this case positive) DC-voltage component to the output signal. The circuit in figure 1.18 shifts the input voltage by a voltage equal to the most negative previous value of the input signal. This means that once a voltage is stored in the capacitor, it can (in theory) never get smaller. In practice, this can be quite complicated, and you might need some way to erase it; this corresponds to “leakage” of the stored shift. For the circuit in figure 1.18, this can be obtained by allowing the capacitor to be drained until it is empty; see figure 1.19 for a possible implementation. For the current DC-clamp, leakage can also be implemented. Find out for yourself how this can be achieved.
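The clamp behaviour described above reduces, with the ideal-diode model, to a running minimum. The following is a minimal sketch (the sinusoidal test input is an assumed example):

```python
# Sketch of the voltage clamp of figure 1.18 with the ideal-diode model:
# the capacitor stores the most negative input seen so far,
# and v_OUT = v_IN - v_C. The sample input is an assumed sine wave.
import math

v_in = [math.sin(2 * math.pi * n / 20) for n in range(40)]   # two periods
v_c = 0.0          # capacitor initially uncharged
v_out = []
for v in v_in:
    v_c = min(v_c, v)        # diode conducts whenever v_OUT would go below 0 V
    v_out.append(v - v_c)

print(min(v_out) >= 0)       # True: the output never goes negative
```

After the first negative peak has passed, the output swings between 0 and 2 times the input amplitude, exactly the positively shifted signal sketched in figure 1.18.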


Figure 1.19: Shift of the average DC-level of a periodic signal with leakage.

Voltage multiplication

Some systems need a relatively high voltage to operate: significantly higher than the source voltage. An example of this kind of system is the voltage generator for controlling a monitor: TV tubes need some thousands of volts, which have to be powered from a 220 V mains outlet; many mobile-phone screens need 8 V to operate, while the battery only delivers 3 V. A different application is the charge pumps in EEPROM memories: for programming and erasing, a voltage of 7-15 V is needed, which has to be made from a 1-5 V power supply voltage. A method for increasing the voltage to values well above the source or supply voltage is the use of transformers. A huge disadvantage of transformers is that they are quite large and expensive, while they are very difficult to integrate in an IC. Therefore, frequently so-called charge pumps are used instead of transformers. Below, two charge pumps are discussed briefly.

Voltage doubler

With the earlier mentioned clamping techniques, a capacitor can (for instance) be used to create a DC voltage which is equal to the difference between the maximum and minimum of the input signal. Assuming a sinusoidal input voltage, the output voltage would then be equal to twice the input signal magnitude.

Voltage doubler.

Using a positive clamp and a negative clamp, see the circuit above, a voltage doubler (with rectification) is created. The circuit cannot be expanded towards higher output voltages because the output voltage is DC.

Cascade of charge pumps

A decent mixture of rectifying and clamping is found in the circuit shown below: it is a series of two identical clamp/rectifier circuits, each composed of 2 capacitors and 2 diodes. Every subcircuit increases the output voltage by $2 \cdot \hat{v}$, where $\hat{v}$ is the maximum value of the input signal.

Charge pump.

Analysis Using the fact that $v_{IN}$ delivers a voltage to the capacitor C1 — equal to the maximum value $\hat{v}$ — through D1, we see that the voltage across D1 varies from 0 V to $2 \cdot \hat{v}$. In other words: the voltage across D1 is the voltage $v_{IN}$, “clamped” on a DC-voltage component of $\hat{v}$. The same holds for the combination D2-C2, which leads to a DC voltage of $2 \cdot \hat{v}$ across C2. The process of transporting charge to C2 is a stepwise process, which resembles the pumping of charge through two valves. The system can be extended repeatedly with C-D combinations and can be used for the generation of very high DC voltages. Note that the signal shapes in the previous figure are valid only in steady state, a long time after switching on the circuit. Although the circuit can generate huge voltages, it is not an amplifier: the voltage amplitude is (ideally) unchanged within the entire circuit. Only the DC level of the signal changes.

Chapter 2

Summary of semiconductor physics

2.1 Introduction

This chapter presents a brief introduction to semiconductor physics and to the behavior of the most important nonlinear electrical components: the diode, the bipolar junction transistor (BJT) and the MOS transistor. At the end of this chapter you should have a basic understanding of how semiconductor components work; the underlying physics and further theoretical depth are the subject of other courses. Nonlinear components are necessary in any system that:

• fulfills nonlinear functions. This could be anything: translating analog to digital signals, frequency conversions in transmitters and receivers, or digital functions. Almost every serious electronic system needs nonlinear components to function properly.

• requires power gain larger than 1. Power amplification is used to boost small signals to a sufficient power level, but is also essential to compensate for signal losses in a system. In components with power gain (larger than 1), a large output power is usually controlled by a much smaller input power. The difference between output power and input power (and dissipated power!) is supplied by some energy source, which usually is a DC current source or DC voltage source. Because these power supplies deliver their power at 0 Hz, while the signal to be amplified usually has a non-zero frequency, frequency conversion is necessary. And since frequency conversion is not possible with only linear components, every amplifier with power gain larger than unity requires nonlinear parts.

The electrical conductance of a material is determined by the layout of its atom grid (i.e. the lattice structure) and the type of atoms in the lattice. In semiconductor physics typically a nice regular lattice is assumed. Then the electrical properties of the material are completely determined by the internal structure of the atoms involved. As you know, an atom consists of a core which is surrounded by bands, each of which can contain a number of electrons. Now, the following situations can arise:


• The bands closest to the core are completely filled with electrons, while the other bands are completely empty. In the filled bands, electrons have no place to go, while in the empty bands, there are no electrons to move around. In this case, there is no possibility for electrons to move through the lattice and thus the electrical current is always 0 A. Materials with this property are called insulators.

• The bands closest to the core are completely filled with electrons, while the first band that isn't completely full is also not completely empty. In the full bands electrons still cannot move, but in the partially filled band they can. Even if there is only one electron free to move per atom, a huge number of electrons can already move around in a very small volume of material. These electrons can move freely through the entire lattice, and such materials are conductors.

• The bands closest to the core are completely filled with electrons, while the first band that isn't completely filled is almost empty. Not completely empty, but almost. Effectively this means that just 1 electron per many atoms can move around in the lattice. This is the case in what are called semiconducting materials. Semiconducting materials have very nice properties that can be used to create nonlinear components. In the next section, we will explore these materials further.

2.2 Semiconductors

Electrically speaking, most of the bands in an atom are not very interesting:

• completely filled bands are full. Electrons cannot move anywhere, simply because there is no place for them to move to. These bands thus do not contribute to the electrical conductance.

• completely empty bands don't contain any electrons and thus do not contribute to the conductance either.

Therefore, from now on, we only consider the two outermost bands of a (semi)conductor that contain electrons. The outermost of the two is denoted the conduction band, while the inner one is called the valence band. In any semiconductor, the valence band can hold (per atom) exactly as many electrons as there are electrons available for this band: the valence band can hence be filled exactly. If this is the case, there are no electrons left for the conduction band, which thus remains completely empty. The semiconductor then acts as an insulator... The nice thing about semiconductors is that the valence band and conduction band are very close to each other, making it possible for electrons in the valence band to gain enough (thermal or electrical) energy to "jump the gap" to the conduction band. Once these electrons are in the conduction band, they may lose energy and fall back into the valence band¹.

¹This process is very similar to water that isn't at boiling temperature: some molecules have enough energy to evaporate to the gas state. At the same time, some water molecules in the gas state may lose energy and return to the liquid state. In a closed environment, a dynamic equilibrium results in which the number of molecules that evaporate equals the number of molecules that return to the liquid state.


Figure 2.1: Atomic structure of a semiconductor (here Si): the 3s band can be completely filled and the 3p band then is empty. These two bands are sufficiently close that some electrons can ‘evaporate’ to the conduction band.

Once an electron has 'evaporated' from the valence band, across the band gap, into the conduction band, it has an enormous amount of space to move around freely: such an electron can contribute to the electrical conduction. At the same time, that electron leaves a void behind in the valence band, which is denoted a hole. The charge of a hole is positive: it corresponds to one missing electron. The hole can also contribute to the electrical conduction. Note that a hole moves through the lattice because successive electrons move into it: the hole therefore moves in the opposite direction of the electrons!

Figure 2.2: Holes in the valence band contribute to the electrical conduction just like a hole can move through a slider puzzle: by moving the pieces, the location of the hole changes.

Because the movement of a hole is due to successive movements of different electrons, one might guess that an electron in the conduction band can move more easily than a hole in the valence band. The ease with which the electron (negative charge carrier) and the hole (a missing electron, acting as a positive charge carrier) move through the lattice is usually expressed as mobility. The mobility tells you how much speed (m/s) a carrier gains in a given electric field (V/m), making the unit of mobility m²/(V·s), or, more commonly used in the field of semiconductors, cm²/(V·s). In silicon the mobility of holes is about a factor 2 to 3 lower than that of electrons.
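To make the notion of mobility concrete, a small sketch. The mobility values below are typical textbook numbers for lightly doped silicon at room temperature; they are our assumption, not values given in this text.

```python
# Drift velocity v = mu * E.
# Mobility values: typical textbook numbers for lightly doped Si at 300 K
# (an assumption for illustration, not taken from this book).
mu_n = 1400.0     # electron mobility [cm^2 / (V*s)]
mu_p = 450.0      # hole mobility    [cm^2 / (V*s)]

e_field = 1000.0  # applied electric field [V/cm]

v_electron = mu_n * e_field  # drift speed of electrons [cm/s]
v_hole = mu_p * e_field      # drift speed of holes [cm/s]

# Holes are roughly a factor 3 slower than electrons in silicon:
print(v_electron / v_hole)
```

The same field thus moves electrons about three times faster than holes, which is exactly the "factor 2 to 3" mentioned above.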

The most commonly used semiconductor is silicon (Si), whose valence and conduction band can each hold 4 electrons per atom. Since it is a semiconductor, it has 4 electrons available for those 2 bands: it is a 'group 4' (or group IV) atom in the periodic table.


Figure 2.3: Doping is the replacement of one in many Si-atoms by a ‘group III’ or ‘group V’ atom. This results in a semiconductor with excess mobile holes (P) or excess mobile electrons (N), respectively.

Semiconductor materials with only one flavor of atoms are boring and not very useful. This is why we use doping and doped semiconductors.

• In undoped — or intrinsic — semiconductors, the number of electrons in the conduction band equals the number of holes in the valence band, since both are caused by the same ‘evaporation’.

• If a small number of the original atoms is replaced by 'group III' atoms, there will be too few electrons in the valence band. In other words, even without any 'evaporation' there will be holes in the valence band that can move freely. Since holes are positive charge carriers, we call this a P-type semiconductor.

• If a small number of the original atoms is replaced by 'group V' atoms, there will be more electrons than fit in the valence band: these electrons can only find a place in the conduction band, where they can move freely because of all the space there. Because electrons have negative charge, this kind of material is called an N-type semiconductor.

2.3 Diodes

Electrons flow from a low potential to a high potential, whereas holes flow from a high to a low potential. This seems strange, but it is because around 1900 A.D. someone defined the current direction the wrong way, and now we have to deal with that forever. Now, if we take a semiconducting lattice, one half of which is N-doped (excess electrons) and the other half P-doped (excess holes), then:

• a large current (of holes and electrons) can flow if the N-doped side has a lower potential than the P-doped side. This is possible because there are a lot of electrons that can (and will) flow to a higher potential, and there are lots of holes that can (and will) flow to a lower potential.

• electrons and holes will stay where they are if the polarity is reversed: the electrons are already at the higher potential at the N-doped side, and likewise the holes at the P-doped side are already at the lowest potential. None of these carriers is inclined to move. Only the few holes in the N-type region that are due to 'evaporation' of electrons to the conduction band (and the few electrons in the P-doped region) flow in this case. As there are just a few of these charge carriers, the resulting current is quite low.

• it turns out that the voltage-current relation of the diode is:

    iD = ID0 · (e^(q·vD/kT) − 1)

where ID0 is a technology-, temperature- and size-dependent constant. Further, q is the elementary charge, k the Boltzmann constant and T the absolute temperature. The factor vD is the imposed voltage across the P-N structure. Such a component is usually called a pn-junction or diode. In a diode there can hence be two major current components (and two minor ones that make up the leakage current and that are neglected here): an electron current from n to p and a hole current from p to n, if the potential at the p-side is higher than that on the n-side. Both parts contribute to a conventional current (in Ampère) in the same direction: from p to n, since electrons and holes are oppositely charged and flow in opposite directions.
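The exponential diode relation is easy to evaluate numerically. The sketch below uses an assumed saturation current ID0 = 10⁻¹⁴ A and T = 300 K; both are illustrative values, not from this text.

```python
import math

def diode_current(v_d, i_d0=1e-14, temp=300.0):
    """Diode equation: i_D = I_D0 * (exp(q*v_D / (k*T)) - 1).
    i_d0 and temp are illustrative assumptions (1e-14 A, 300 K)."""
    q = 1.602e-19   # elementary charge [C]
    k = 1.381e-23   # Boltzmann constant [J/K]
    return i_d0 * (math.exp(q * v_d / (k * temp)) - 1.0)

# Forward bias: current grows exponentially with v_D
# (about 0.1 mA at 0.6 V for this I_D0).
print(diode_current(0.6))
# Reverse bias: current saturates at -I_D0, the tiny leakage current.
print(diode_current(-0.6))  # approximately -1e-14 A
```

Note the enormous asymmetry between forward and reverse bias: this is the rectifying behavior used throughout chapter 1.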


Figure 2.4: A diode is a component which combines an N-doped and a P-doped semiconductor, respectively having lots of mobile electrons and lots of mobile holes. Holes go from + to − and cause a hole current from p to n only if Vp > Vn. Electrons flow from − to + and then deliver an electron current from n to p. Both give rise to a current (in [A]) from p to n.

2.4 Bipolar junction transistors (BJTs)

A Bipolar Junction Transistor (BJT) is a smart expansion of a diode. In a diode, holes flow from p to n and electrons flow from n to p, where for both current components the v–i relation is exponential:

    ip = IP0 · (e^(q·vD/kT) − 1)
    in = IN0 · (e^(q·vD/kT) − 1)
    iD = ip + in = ID0 · (e^(q·vD/kT) − 1)

In a diode, both current components flow through the diode. They must, since there is no other place to go in a device with only two nodes... However, in a BJT one of these two current components is redirected to a new (third) node that collects this redirected current component. Unfortunately, the other current component is still present, and it represents an unwanted drive current. A schematic view of this principle is shown below. Right next to the cross sections of the NPN and PNP are the schematic symbols for these transistors. The emitter is identified by the arrow; the direction of the arrow is the direction of the current flow in [A]. The collector is the opposite node, and the base node is in the middle. The symbol itself resembles the construction of the very first bipolar transistor as constructed at Bell Labs in the late 1940s. In order to get significant currents, one of the junctions must be forward biased. That way, both a large electron and a large hole current component result. Only one of these components will be directed to the third node to create the output current. To 'catch' this wanted component, the other junction must be reverse biased (or at least far less forward biased). Naming the parts:

• The region from which the charge carriers that form the wanted current component originate is called the emitter, which is Latin for "send out".

• The region into which this wanted current component flows is called the collector, based on the Latin for bin or drain.

• The middle region is called the base, for historic reasons.


Figure 2.5: A BJT is a diode in which one of the major current components is redirected to a third node. This can be done in two ways: you can either redirect the hole current or the electron current.

• The current component that goes all the way through the device is the wanted current iC, which can be influenced exponentially by the voltage across the forward-biased junction, vBE. If you do everything right, this current is (almost) independent of the voltage across the other junction, vCB, and thus is also largely independent of vCE.

• The other current component (the unwanted current that doesn't flow through the entire device, iB) is proportional to iC. If the device is properly designed, the proportionality factor αfe = iC/iB can easily be around 100.

There are 2 types of BJTs:

• one type has a wanted current component consisting of electrons that flow from an N-type region through a P-type region to another N-type region. This type of BJT is called an NPN-transistor, after its internal structure.

• the other type has a wanted current component of holes that flow from a P-region, via an N-region, to another P-region. This is called a PNP-transistor.

The behavior of these two types of bipolar transistors is exactly the same, except for the type of charge carrier. This difference results in a change of current direction and a change of polarity of the voltages on the connectors. Usually the NPN transistor is easier to understand than the PNP transistor: there are fewer minus signs involved in its element equations. That's the main difference, really. Summarized in equations, the behavior of a BJT resembles that of a diode. If the BC-junction is reverse biased (which is the case in the normal operating range) we have:

    iC ≈ IC0 · (e^(q·vBE/kT) − 1)
    iB ≈ iC / αfe                              (2.1)
    iE ≈ ((αfe + 1) / αfe) · iC
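The three terminal currents in (2.1) can be sketched numerically. The parameter values (IC0 = 10⁻¹⁵ A, αfe = 100, T = 300 K) are illustrative assumptions, not values from this text.

```python
import math

def bjt_currents(v_be, i_c0=1e-15, alpha_fe=100.0, temp=300.0):
    """Terminal currents of an NPN in normal operation (BC junction
    reverse biased), following equations (2.1). Parameter values are
    illustrative assumptions."""
    q, k = 1.602e-19, 1.381e-23
    i_c = i_c0 * (math.exp(q * v_be / (k * temp)) - 1.0)
    i_b = i_c / alpha_fe
    i_e = i_c * (alpha_fe + 1.0) / alpha_fe
    return i_c, i_b, i_e

i_c, i_b, i_e = bjt_currents(0.65)
# Kirchhoff's current law: the emitter current carries both the
# collector and the base current, i_E = i_C + i_B.
print(abs(i_e - (i_c + i_b)) < 1e-15)  # True
```

The third line of (2.1) is thus nothing more than Kirchhoff's current law applied to the transistor, given iB = iC/αfe.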

The assumption that the BC-junction is reverse biased is actually not necessary: the actual requirement for proper operation of a BJT is that the BE-junction is much more forward biased than the BC-junction. Noting that the current-voltage relation of a junction is exponential, it is sufficient to satisfy

    e^(q·vBE/kT) >> e^(q·vBC/kT)

and, assuming that a current ratio of 100 satisfies this "much bigger", we get

    e^(q·vBE/kT) > 100 · e^(q·vBC/kT)  ⇔  vBE − vBC > (kT/q) · ln(100) ≈ 120 mV ≈ 100 mV

Note that vBE − vBC = vCE, so this is a condition on the collector-emitter voltage.

If vCE is smaller than about 100 mV, an explicit dependency on vCE must be included:

    iC ≈ IC0 · (e^(q·vBE/kT) − 1) · (1 − e^(−q·vCE/kT))        (2.2)

The operating range in which the rightmost term in (2.2) is significant is called saturation, and hence occurs for vCE < 100 mV. In saturation, the collector current iC decreases when vCE decreases. Figure 2.6 shows the collector and base currents of a BJT as a function of vBE and vCE, on linear axes. For the curves, a typical αfe = 100 was used, resulting in a hardly visible iB−vBE curve. In the iC−vCE plot, 3 curves are shown, for vBE values that each differ by 18 mV. Due to the exponential iC−vBE dependency, each 18 mV difference in vBE results in a factor 2 difference in iC.
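Both rules of thumb used above — the roughly 120 mV for a current ratio of 100, and the 18 mV per factor 2 in iC — follow directly from the thermal voltage kT/q, as this short check shows (T = 300 K assumed):

```python
import math

q, k, temp = 1.602e-19, 1.381e-23, 300.0
v_t = k * temp / q  # thermal voltage kT/q, about 25.9 mV at 300 K

# "Much more forward biased": a current ratio of 100 requires
# v_BE - v_BC > ln(100) * kT/q, i.e. roughly 120 mV.
print(round(math.log(100) * v_t * 1000))  # 119 (mV)

# An 18 mV increase of v_BE doubles i_C, since ln(2) * kT/q is 18 mV.
print(round(math.log(2) * v_t * 1000))    # 18 (mV)
```

Both numbers scale linearly with absolute temperature, which is one reason the bias of a BJT is temperature sensitive (see chapter 3).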


Figure 2.6: Current-voltage dependencies for the BJT: iC(vBE) and iC(vCE).

2.5 MOS-transistors

A MOS-transistor is basically an adjustable resistor. Conduction takes place between two nodes: the region from which the charge carriers start to flow is called the 'source', whereas the region where they leave the device is called the 'drain'². The degree of conduction is determined (among others) by the gate-source voltage. This is shown in the image below.


Figure 2.7: The MOS-transistor: a) cross section side view b) simplified model c) symbol

To briefly explain the operation of a MOS transistor, an N-type device is assumed, in which electrons make up the current. Similar to the BJT, there is also a P-type device that operates identically, although with holes instead of electrons, which results in reversed current directions and voltage polarities compared to the N-type device. For simplicity, in this course the middle region — the p− bulk for the N-type MOS transistor — is assumed to be electrically connected to the source region.

Cross section

A simplified cross section of an N-type MOS transistor is shown in figure 2.8. This cross section shows the (slightly P-doped) substrate containing a heavily N-doped source region and a heavily N-doped drain region. The controlling element is the gate, which is very low-ohmic³ and is insulated from the substrate by a non-conducting layer of oxide. This yields a Metal-Oxide-Semiconductor structure.


Figure 2.8: Cross section of an N-type MOS-transistor

The naming convention for the source region and drain region is based on the direction of the flow of the charge carriers: charge carriers by definition move from the source to the drain. For N-type transistors this means that the source potential is always lower than or equal to the drain potential.

²After the English words for source and drain, respectively. Who said that science was difficult?
³Gate material must be low-ohmic, so heavily doped semiconductor materials and actual metals are used for this.

The effect of the gate

For a qualitative explanation of the operation of a MOS transistor, we need four basic principles:

• Electrons flow from a low potential to a high potential;

• Holes flow from a high potential to a low potential;

• A voltage drop across a structure corresponds to an electric field;

• n · p = ni², where n and p are the densities of electrons and holes respectively in a material, and ni is the density of electrons that "evaporate" from the valence band into the conduction band for your semiconductor material. This relation comes from semiconductor physics and will not be motivated or discussed in depth in this book.
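The relation n · p = ni² makes doped material easy to reason about numerically. The sketch below assumes full ionization (n ≈ ND in N-type material) and uses ni = 1.5·10¹⁰ cm⁻³, a common textbook value for silicon at 300 K; both are illustrative assumptions, not taken from this text.

```python
def carrier_concentrations(n_doping, n_i=1.5e10):
    """Electron and hole densities [cm^-3] in N-doped material,
    using n * p = n_i^2 with n = N_D (full ionization assumed).
    n_i = 1.5e10 cm^-3 is a common textbook value for Si at 300 K."""
    n = n_doping          # electrons: set by the donor doping level
    p = n_i ** 2 / n      # holes: follow from the mass-action law
    return n, p

n, p = carrier_concentrations(1e16)
print(n)  # 1e16 electrons per cm^3
print(p)  # only about 2.25e4 holes per cm^3: vastly outnumbered
```

The asymmetry is striking: doping at one part per roughly ten million silicon atoms already suppresses the minority carrier (hole) density by many orders of magnitude.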

Now, we will first investigate a part of the MOS-transistor: we leave the source and drain out for the moment. This is shown in the figure below, and is what is called the MOS-capacitor4.


Figure 2.9: Cross section of an N-type MOS-capacitor

We can now distinguish three cases5:

• no voltage difference between the connectors: vGB =0

• the potential at the gate is higher than the one at the substrate: vGB > 0

• the potential at the gate is lower than the one at the substrate: vGB < 0

⁴The shown element is a 1-port device and can thus only have a very basic function, electrically speaking. Since there is no current flowing from one side to the other (because of the insulating oxide layer), it cannot be a resistor, inductor or diode-like structure. It does however resemble a plate capacitor, and we will see later that it actually behaves as such.
⁵We will assume that there is no contact potential difference. Taking this into account is actually quite easy: v'GB = vGB + φcontact, but doing so only makes this discussion needlessly complex. For the interested: a contact potential is the difference in voltage between two different materials, just like the ones you would encounter in redox reactions. You can try this yourself by placing a piece of tin foil against an amalgam tooth filling and experiencing the "contact potential".

For a zero voltage drop, vGB = 0, the electric field inside the gate oxide is also zero. According to Poisson's law, this means that there is no charge build-up in either the gate or the substrate. If you don't know or don't like Poisson's equation, think of the structure as a plate capacitor: with zero voltage across the plates, the stored charge is zero. It's got something to do with the element equation of a capacitor, Q = C · V.

For vGB < 0 there is a non-zero electric field in the oxide layer. Due to this electric field, holes are attracted to the semiconductor-oxide interface while electrons are pushed away from it: due to vGB < 0 the concentration of holes near the oxide layer is increased, while the electron concentration is decreased (compared to the case where vGB = 0).

For vGB > 0, a non-zero electric field is also present, but with opposite polarity. This causes electrons to be attracted towards the oxide, while holes are repelled. As a result, the electron concentration near the oxide-substrate interface is increased, and the concentration of holes is decreased (compared to the case where vGB = 0).

Weak inversion, depletion, strong inversion

In the discussion above, the last case is the most interesting: vGB > 0. In this case, mobile holes are repelled from the oxide-substrate interface and electrons are attracted to it. We can distinguish three levels of attraction:

• For small vGB > 0, a small number of holes is pushed away: the P-type material becomes a little "less P". What remains is called the depletion layer. One should note that the concentration of electrons has risen simultaneously, since n · p = ni².

• When vGB > 0 increases, more holes are pushed away and simultaneously more electrons are attracted towards the oxide-substrate interface. If the concentration of electrons exceeds the concentration of holes, the semiconductor material is said to be in inversion. If, furthermore, both the hole and electron concentrations are below the dope level of the semiconductor material, this region of operation is called weak inversion.

• If an even larger vGB > 0 is applied, even more holes are pushed away and the concentration of electrons rises even further. When the electron concentration at the interface exceeds the dope level of the semiconductor material, this region of biasing is called strong inversion.

One can deduce that the current through a MOST in weak inversion is determined by diffusion of carriers, similar to the mechanism in a diode or in a BJT. As a result, the v−i characteristics of a MOS transistor in weak inversion greatly resemble those of a BJT. In this book we will neglect the weak inversion mode. The reason for this is that this operating mode is less important⁶ and this assumption makes calculations and design a lot easier.

⁶Circuits that work in weak inversion are usually a lot slower than circuits in strong inversion. Weak inversion is used for slow but energy efficient applications, such as watches and pacemakers.

2.5.1 MOS-transistor in strong inversion

In inversion, the excess charge carriers that were originally in the bulk semiconductor material (holes in the case of an N-type MOS transistor, which has a p− bulk) are pushed away from the oxide-substrate interface, and the opposite charge carriers are attracted to that interface. In an N-type MOS transistor in strong inversion, the concentration of electrons drawn to the interface exceeds the original hole concentration of the semiconductor material. The value of vGB at which the transistor is marginally in strong inversion is called the threshold voltage VT. In strong inversion, the current mechanism is dominated by drift, similar to the mechanism in plain resistors. Therefore, the MOS transistor in strong inversion is similar to an adjustable — but non-linear — resistor: there are two n+ terminals (source and drain) connected by a now N-type region, in which the concentration of mobile electrons can be varied by adjusting vGB. For low vDS, the situation is depicted in the following figure:


Figure 2.10: N-type MOS-transistor: "off" (no inversion layer), and in strong inversion (thin and thick inversion layers).

If there is no inversion layer present, the conduction between source and drain is zero⁷. If there is an inversion layer present, we have effectively created a resistor, whose resistive value depends on the thickness of the inversion layer:

• n+ – n – n+ with a thin inversion layer, for small vGS (= vGB);
• n+ – n – n+ with a somewhat thicker layer, for somewhat higher vGS;
• n+ – n – n+ with a thick inversion layer, for high vGS.

The electrical conductivity of a material is proportional to the product of the number of mobile charge carriers and their mobility. From this it becomes clear that by varying the electron concentration between source and drain, we can alter the conductivity between source and drain. Of course it is possible to derive a mathematical expression for this relation, but we will not do so here. We will however give the resulting element equations in the next section.

⁷Because of the absence of an inversion layer, the only possible currents are due to leakage and weak inversion conduction. These two mechanisms will be neglected during our discussion of strong inversion.

2.5.2 MOS-transistor in strong inversion: summary

For the MOST to be in strong inversion:

    vGS > VT                                   (2.3)

The region where there is an appreciable effect of vDS on the drain current is denoted the "linear" or "triode" region. These names originate from the quite linear dependence of the drain current on both vDS and vGS for low vDS values, and from its close resemblance to the electrical behavior of a triode vacuum tube. In most circuits analyzed and designed throughout this book, this region is avoided. The set of element equations of an NMOS transistor in this region is:

    iD = K · ((vGS − VT) · vDS − vDS²/2)        (2.4)
    iG = 0

valid for

    vDS ≤ vGS − VT
    vGS > VT

with

    K = (W/L) · μ · Cox

The factor K comprises some technology parameters — the mobility and the oxide capacitance — and the dimensions of the MOST. In this course, we will assume K to be known and fixed for the transistors in assignments and in lab work. The threshold voltage is also determined by the technology — mainly by the dope levels and the oxide thickness — and will also be assumed constant and known. For vDS > vGS − VT, the relations in (2.4) no longer hold. The element equations for the region where vDS > vGS − VT can be obtained from the ones in (2.4) by substituting vDS = vGS − VT. Then iD is almost independent of vDS, with a square-law dependence between iD and vGS; this region is denoted saturation⁸, while the element equation is called the square-law relation:

    iD = ½ · K · (vGS − VT)²                    (2.5)
    iG = 0

valid for

    vDS ≥ vGS − VT
    vGS > VT

with

    K = (W/L) · μ · Cox

⁸Be careful: for BJTs, the saturation range is the range where vCE is low. For the MOST to be in saturation, vDS should be high.
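Equations (2.4) and (2.5) together form one piecewise drain-current model, and they meet exactly at the boundary vDS = vGS − VT (that is, after all, how (2.5) was obtained from (2.4)). A sketch, with an assumed K = 0.2 mA/V² and VT = 1 V for illustration:

```python
def nmos_drain_current(v_gs, v_ds, k_factor=2e-4, v_t=1.0):
    """Drain current of an NMOS in strong inversion, per (2.4)/(2.5).
    k_factor = (W/L)*mu*Cox and v_t are assumed known and fixed,
    as in the text; the values here are illustrative only."""
    if v_gs <= v_t:
        return 0.0  # no strong inversion: neglect leakage / weak inversion
    v_ov = v_gs - v_t                      # "overdrive" voltage vGS - VT
    if v_ds < v_ov:                        # "linear"/triode region, eq. (2.4)
        return k_factor * (v_ov * v_ds - v_ds ** 2 / 2.0)
    return 0.5 * k_factor * v_ov ** 2      # saturation, eq. (2.5)

# The two expressions meet smoothly at v_DS = v_GS - V_T,
# and i_D stays flat for larger v_DS:
print(nmos_drain_current(2.0, 1.0))  # boundary value
print(nmos_drain_current(2.0, 2.0))  # deeper in saturation: same value
```

The boundary check confirms that the model is continuous: the drain current does not jump when the transistor crosses from the triode region into saturation.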

Figure 2.11 shows the characteristics of iD as a function of vGS for a few values of vDS. For the curves, a threshold voltage equal to 1 V was assumed for simplicity. Note that the square-law relation holds for the iD−vGS curve with vDS > 2 V; the other two iD−vGS curves go into the "linear" region defined in (2.4).


Figure 2.11: The characteristics of an N-type MOS-transistor in strong inversion.

The iD−vDS curves show square-law behavior — according to (2.5) — for vDS > vGS − VT, and "linear" behavior according to (2.4) elsewhere. Note that 3 iD−vDS curves are shown, for different vGS.

2.5.3 MOS-transistor symbols

Figure 2.12 shows both the cross sections and the circuit schematic symbols for NMOS transistors and PMOS transistors. These two are — similar to NPNs and PNPs — basically each other's complement: they operate on the opposite type of charge carriers and hence have opposite voltages and current directions. Identical to the situation for BJTs, the source node is marked by the arrow, where the direction of that arrow is the direction of the current in [A].


Figure 2.12: Cross sections and symbols for MOS transistors: the N-channel MOS transistor at the left, the PMOS transistor at the right-hand side.

Chapter 3

Bias circuits

3.1 Introduction

In the previous chapter, two components with larger-than-unity power gain were reviewed: the bipolar junction transistor (BJT) and the MOS transistor. Both of these transistors are strongly nonlinear, while the controlled output current only flows in one direction. To amplify an input signal (e.g. a small sinusoidal voltage) using, for example, the simple circuit configuration in figure 3.1, the input signal and output current can be given by:

    vBE = V̂be · sin(ω·t)
    iC = IC0 · (e^(q·vBE/kT) − 1)

In figure 3.1, the input signal is plotted in the lower left corner as a function of time; for the circuit in figure 3.1, this input signal equals the vBE of the transistor. Graphically, the collector current of the BJT can easily be constructed by mirroring the input signal in the nonlinear vBE−iC curve. The resulting iC is shown on the right side of figure 3.1. To make the input and output signals more clearly visible, they have been zoomed in upon; the zoom factor can be estimated from the dotted lines. From the expression, and even more so from the wave shape in figure 3.1, it can be seen that the output current variation is rather small and heavily distorted. Of course, this small magnitude of the output signal can be increased by increasing the input signal's amplitude, but this causes an even more strongly distorted output signal. The main problem with this circuit is that its input signal goes positive and negative, which translates into the small and heavily distorted collector current.
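The distortion of this unbiased circuit is easy to see numerically: the positive half-cycle of vBE produces far more collector current than the negative half-cycle removes. A sketch, with an assumed IC0 = 10⁻¹⁵ A, T = 300 K, and a ±60 mV input swing (illustrative values, not from this text):

```python
import math

def collector_current(v_be, i_c0=1e-15, temp=300.0):
    """i_C of a BJT around v_BE = 0; parameter values are assumptions."""
    q, k = 1.602e-19, 1.381e-23
    return i_c0 * (math.exp(q * v_be / (k * temp)) - 1.0)

# A +/-60 mV swing around 0 V (no bias): the positive peak adds roughly
# ten times more current than the negative peak removes.
i_pos = collector_current(+0.060)
i_neg = collector_current(-0.060)
print(i_pos / abs(i_neg))  # large asymmetry -> heavy distortion
```

Superimposing the same swing on a DC bias voltage (the subject of this chapter) moves the operating point up the exponential curve, where the relative swing is small and the response is far more linear.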

3.1.1 Biasing a transistor: the bias point

As a starting point for properly biasing a transistor, we assume the simple amplifier circuit of figure 3.1. For the principle, however, it makes no difference whether we choose a different configuration, or whether we use a MOS transistor, vacuum tubes or something else with electrical power gain. The general principle of such circuits is:

• the output current of the transistor is a nonlinear, monotonic function of the input voltage.



Figure 3.1: Principle of an amplifier circuit with (here) an NPN; the current plot is zoomed in upon.

• variation in output current due to an input voltage is (usually) converted into a varying output voltage by a resistor or some other impedance.

For transistors (and the like) there is one other property that yields a requirement on proper biasing:

• the transistor's output current cannot change its sign: the current flows in just one direction.

From this, it follows that to amplify an input signal with some degree of linearity, the input signal first has to be preprocessed in such a way that it does not change its sign. This can be achieved by superimposing the input voltage variation (positive and negative) on a larger DC voltage. This DC voltage, or its corresponding DC current, is called the bias point of the transistor. In figure 3.2, a Common Emitter

Figure 3.2: Principle of an amplifier circuit with (here) an NPN, including bias

Circuit (CEC, see chapter 5) is presented, which is biased in a convenient bias point. The input signal is much smaller than in the situation of figure 3.1, yet the output current is significantly larger and the circuit now works considerably more linearly.

3.1.2 Biasing a transistor: requirements for its bias point

There are a number of requirements for a proper bias point of transistors in circuits. Because the output current of a transistor is a well-defined monotonous function of its input voltage, it does not matter whether the bias point is defined in terms of driving voltage or output current. For a proper and stable bias:

1. there must be a non-zero bias current through the transistor;

2. the bias current must be as insensitive as possible to variations in temperature; The circuit must continue to operate properly for varying temperatures. Variations in temperature are usually due to the environment and to the dissipation of the circuit itself. If the temperature changes, the collector current (for a BJT) or the drain current (for a MOS transistor) changes significantly for a fixed input voltage (VBE and VGS respectively). A suitable bias circuit decreases this sensitivity to temperature significantly.

3. the bias current has to be as insensitive as possible to the spread in the characteristics of the transistor; When producing electronic components, there is always some spread in the components due to tolerances in the production process1. For example, the spread in the current gain of a bipolar transistor can amount to 50%; the spread of IC0 is also significant. In MOS-transistors, the spread is mainly in the threshold voltage VT and in the current factor K.

4. the bias current must be such that the input signal will be amplified sufficiently linearly. The input signal of a transistor appears as a variation around the bias point. Hence, the bias point has to be chosen in such a way that the signal is amplified in a sufficiently linear fashion. Usually, this means that the variations have to be small compared to the bias2.

3.1.3 Biasing a transistor

The output current of a transistor cannot change its sign and hence, to avoid high distortion levels or even clipping, a transistor must be biased at a bias current that is larger than the maximum desired current variation. Translated to the “input” of the transistor, this is equivalent to biasing the transistor with an input voltage (VGS or VBE) that is much larger than the signal voltage variations superimposed on it.

1For IC-processes, these tolerances are very small but inevitable. When implanting for example 1000 dopant atoms in an area of 0.1 μm × 0.1 μm of a MOS-transistor, you get a spread of √1000 ≈ 31 dopant atoms for free! This is due to the Poisson statistics of implanting them one by one. Furthermore, the nominal dimensions (here 0.1 μm) also spread.
2This is easily proven by making a Taylor series expansion of the non-linear transfer, and looking at the ratio between the (usually undesired) higher-order terms and the (usually desired) first-order term.

The bias current is set by applying a well-defined bias voltage to the transistor. Since a BJT also has a well-defined relation between the base and collector current, a bias current for a BJT can also be set using a DC base current. Concluding, we can bias a transistor by:

• BJT: setting $V_{BE}$

• BJT: forcing $I_C$, hence indirectly setting $V_{BE} = \frac{kT}{q} \ln \frac{I_C}{I_{C0}}$.

• BJT: forcing $I_E$, hence also indirectly setting $V_{BE}$, using also $I_C = \frac{\alpha_{fe}}{\alpha_{fe}+1} \cdot I_E$.

• MOS: setting VGS

• MOS: forcing IS = ID

For sufficiently linear behavior, the variations on the transistor's input voltage must be sufficiently small. The other way round: for small input voltage variations, the transistor's operation is quite linear, and normal linear circuit analysis techniques may be used for evaluation of its (modelled) behavior3. In this chapter, we describe the methods for obtaining a well-defined bias point. In chapter 4, a linear equivalent circuit for transistors is derived that describes their behaviour about the bias point, and in chapter 8 we give some examples of amplifier circuits, where the small-signal equivalent circuits of chapter 4 are applied.
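The equivalence of these biasing options can be illustrated numerically; the device values below (IC0, αfe, the thermal voltage) are assumptions for the sake of the example:

```python
import math

# Assumed device parameters (illustrative only):
IC0 = 1e-12       # saturation current [A]
VT_TH = 0.025     # thermal voltage kT/q [V]
alpha_fe = 100.0  # current gain

IC = 1e-3  # desired bias collector current [A]

# Forcing IC indirectly sets VBE = (kT/q) * ln(IC/IC0):
VBE = VT_TH * math.log(IC / IC0)

# Forcing IE instead uses IC = alpha_fe/(alpha_fe + 1) * IE, so:
IE = (alpha_fe + 1.0) / alpha_fe * IC

print(round(VBE, 3))  # about 0.518 V for these assumed numbers
print(IE)             # about 1.01 mA
```

Whichever quantity is forced, the others follow from the element equations: setting any one of VBE, IC or IE fixes the bias point.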

3Even without linearisation, we can calculate everything for a circuit, although the complexity increases a lot. For the circuit in figure 3.2, the collector current for a sinusoidal input signal is $i_C = I_{C0} \cdot e^{q \cdot (V_{BE} + V_{be} \sin(\omega t))/kT}$: rather unreadable and complex. The Taylor series expansion of this gives the DC-current, the fundamental harmonic and all harmonic distortion components. This method is rather cumbersome. For circuits containing multiple transistors, such calculations quickly become impossible. One solution to cope with this impossibility to calculate analytically is simulating numerically, which usually gives very little insight: you only obtain a number and no relation that would reveal the dependence on the various parameters. Another solution is linearisation and analysing the linearised model; this will be addressed further on in this book. The estimation of the resulting error (of the harmonic distortion) can be done by also making higher-order models, which correspond to the higher-order terms in a Taylor series; it might seem like a lot of work, but it is rather efficient.

3.2 Biasing a BJT

Before we go into the details of how to bias a BJT, it might be useful to restate its element equations:4

$i_C \approx I_{C0} \cdot e^{q \cdot v_{BE}/kT}$

$i_B = \frac{i_C}{\alpha_{fe}}$

As stated in §3.1.3, there are many methods for biasing a BJT that essentially boil down to forcing a certain bias current IC and its corresponding bias voltage VBE.

Biasing VBE using a DC-voltage source

The easiest method for forcing a collector current is using a DC-voltage source that provides a $V_{BE} = \frac{kT}{q} \ln(I_C/I_{C0})$, as in figure 3.2. For transistors that have to deliver a DC current this is quite sufficient. Transistors that must amplify signals should, however, be driven by the DC voltage source in series with a signal voltage source. In theory, placing voltage sources in series is easy5, but in real circuits it is usually quite difficult. Another disadvantage of this method is that it is rather sensitive to variations in temperature, due to the temperature dependence of the BJT: a 1 K increase in temperature already results in an increase of IC of about 7%6. However, an advantage is that the base current has no effect, which means that the circuit is insensitive to variations in αfe.

Biasing by forcing a base current (ideal)

Bipolar transistors can be biased to a certain DC-current in various ways. When αfe is known, the easiest way is to “force” a DC base current IB, which results in a DC collector current IC. Forcing a base current IB fundamentally has to be done using a current source, as shown in figure 3.3a.

Figure 3.3: Biasing the collector bias current by “forcing” a base current

4The equations are valid under the assumption that $V_{BE} \gg kT/q$, which is always true for any real circuit.
5In reality it is only easy if you use batteries.
6Note that for an increase of 7% per K, the increase per $N$ K is already a factor $1.07^N$.

In theory, this is a good method which only depends on the current gain factor αfe of the BJT. This αfe is reasonably insensitive to variations in temperature, but can vary significantly between BJTs.

Biasing by forcing a base current (non-ideal)

Using ideal current sources to bias BJTs is fairly easy, but also purely theoretical. In reality, there is no such thing as an ideal current source: current sources are always made from passive (R, L, C, ...) and/or active (MOST, BJT, ...) components. These real current sources are more or less ideal. If they are built such that their output impedance is very high (compared to the impedance of the controlled BJT's base-emitter junction), we may model the source as an ideal source. In many circuits, the ideal current source of figure 3.3a is implemented with a highly non-ideal current source: a resistor, see figure 3.3b. The desired value of IB is obtained by setting the resistor RB to an appropriate value. This value follows from e.g. the mesh equation:

$-V_{CC} + I_B \cdot R_B + V_{BE} = 0$

from which we see

$R_B = \frac{\alpha_{fe} \cdot (V_{CC} - V_{BE})}{I_C}$

It is very difficult to calculate exactly what the resulting IC or VBE is, since this equation is highly non-linear. Using a smart trick — or a fair assumption — we can simplify the calculations significantly: a good model (in terms of accuracy and simplicity) is that the VBE of a silicon bipolar transistor is typically between 0.6 V and 0.7 V. Now:

$R_B \approx \frac{\alpha_{fe} \cdot (V_{CC} - 0.65\,\mathrm{V})}{I_C}$   (3.1)

Although the choice of VBE ≈ 0.65 V seems arbitrary, it does give a result with reasonable accuracy:

• if the source voltage VCC is relatively large, VBE ≪ VCC, then an error in the model of VBE has only little influence on the final bias current of the BJT.

• the voltage VBE varies very little for a varying collector current, since it depends logarithmically on the current IC. For example, a 20% variation in collector current gives rise to just a 5 mV change in VBE; even an error of a factor 2 in IC is still only a change of 18 mV in VBE.

It can be concluded that if the BJT is properly biased, the errors introduced by the assumption VBE ≈ 0.65 V are usually small, while the calculations are simplified enormously. A benefit of this method of biasing is that we only have to choose one resistor value. An extra advantage comes from the temperature insensitivity of αfe. If IC were to increase (and with it IB), then the voltage drop across RB would increase, causing VBE to decrease, counteracting the initial change in IC. A disadvantage is the large sensitivity to spread in αfe. For unselected discrete transistors of one production series, the value of αfe can vary up to 50%, which means that for every new transistor, a different resistor value must be set to get the same IC. In a production facility this would be a huge problem, whereas it is not in a laboratory (as long as you do not change the transistor). We will return to this spread sensitivity later.
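A short sketch, with assumed supply and transistor values, shows both the one-resistor convenience of equation (3.1) and its sensitivity to the spread in αfe:

```python
# Assumed values, for illustration only:
VCC = 10.0        # supply voltage [V]
IC_target = 1e-3  # desired collector current [A]
alpha_fe = 100.0  # nominal current gain

# Equation (3.1): RB = alpha_fe * (VCC - 0.65 V) / IC
RB = alpha_fe * (VCC - 0.65) / IC_target
print(RB)  # about 935 kOhm

# The drawback: with the SAME resistor, a transistor whose alpha_fe
# is 50% higher settles at a collector current 50% higher as well:
IC_actual = 1.5 * alpha_fe * (VCC - 0.65) / RB
print(IC_actual / IC_target)  # about 1.5
```

The spread in αfe translates one-to-one into a spread in the bias current, which is exactly the disadvantage described above.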

Biasing using emitter degeneration

To avoid the disadvantages of the bias methods introduced above, some form of “self-correction” is applied in just about every real circuit. This self-correction ensures that variations are counteracted by the circuit itself. There are many different methods for self-correction7. The general term for these effects is feedback. In other chapters, feedback is explained in detail; for now, we use relatively simple feedback configurations. In feedback circuits, the measured quantity is compared to the desired value, and (in this chapter) the measured difference is used to minimize the difference between the two. For a transistor circuit, we could for instance measure the output (collector or drain) current and use this measurement to get and keep this current at a specific value. Now, there are only two possibilities, since:

• a transistor is controlled by a voltage

• the output current of a BJT can be measured at the collector or emitter (for a MOS transistor at the source or at the drain).

Figure 3.4 displays both methods. Note: this is a simplified situation for circuits with one transistor. The principle can be expanded by feedback around any arbitrary system.

Figure 3.4: Principle of feedback for IC of a transistor (here NPN)

An often used feedback method that measures at the emitter (or source) side is called emitter degeneration (or source degeneration for a MOS transistor). In figure 3.5, this emitter degeneration is drawn. The circuit counteracts influences such as temperature changes and spread in αfe. If, for whatever reason, the temperature were to rise while VBE is constant, the collector current — and thus the emitter current — would rise due to the temperature dependence of the BJT8. This increasing emitter current results in an increased voltage drop across RE. Assuming — for simplicity — a constant base voltage, the increasing emitter voltage then results in a decrease of VBE, which counteracts the initial increase of the current. In the circuit of figure 3.5a, the collector current of the BJT is set by applying a base voltage and using emitter degeneration (feedback via the emitter current and emitter voltage).

7For example, the walking speed in a busy store: if you go too fast, you crash into everybody, causing you to slow down. If you walk too slowly, everybody crashes into you, or steps on your heels; hence you start walking a bit faster. The important things here are: the difference (in speed) and the impact of this difference (bruises). In an abandoned city street, there is no correction, allowing you to walk as fast as you'd like.
8The temperature dependence of the collector current (and emitter current) of a BJT at constant VBE is about +7%/K.

Figure 3.5: Biasing of IC by use of emitter degeneration

The resulting bias current now follows from some straightforward math:

$I_E = \frac{V_B - V_{BE}}{R_E}$

$V_{BE} = \frac{kT}{q} \ln \frac{I_C}{I_{C0}}$

$I_C = \frac{\alpha}{\alpha+1} \cdot I_E$

$I_E = \frac{V_B - \frac{kT}{q} \ln\left( \frac{\alpha}{\alpha+1} \cdot \frac{I_E}{I_{C0}} \right)}{R_E}$

Simple, isn't it? However, this does not give a relation that can be easily solved analytically without using the Lambert W function. As stated earlier, the bias current can be determined fairly accurately by assuming (modelling) that for a BJT in operation VBE ≈ 0.65 V:

$I_E = \frac{V_B - V_{BE}}{R_E}$

$V_{BE} \approx 0.65\,\mathrm{V}$

$I_E \approx \frac{V_B - 0.65\,\mathrm{V}}{R_E}$

In figure 3.5b, the base voltage is set by resistors RB1 and RB2. If this voltage divider has a low impedance, then the base voltage is not (well, hardly) a function of the base current. In that case $V_B = V_{CC} \cdot R_{B2}/(R_{B1}+R_{B2})$, which effectively gives the configuration of figure 3.5a. If the voltage divider is not low-ohmic, then the base current does have an effect on the base voltage. This complicates matters, and to make it worse, this is what you usually have in a real circuit. The bias current of this circuit can be calculated by brute force, but the calculation can be simplified using divide-and-conquer. In this case, using a Thévenin representation of the resistive divider and the voltage source simplifies things considerably. The resulting equivalent network is shown in figure 3.6. For this:

$I_E = \frac{V_E}{R_E}$

$V_E \approx V_B - 0.65\,\mathrm{V}$

$V_B = V_{CC} \cdot \frac{R_{B2}}{R_{B1}+R_{B2}} - I_B \cdot (R_{B1} /\!/ R_{B2})$

$I_E = \frac{V_{CC} \cdot \frac{R_{B2}}{R_{B1}+R_{B2}} - \frac{I_E}{\alpha+1} \cdot (R_{B1} /\!/ R_{B2}) - 0.65\,\mathrm{V}}{R_E}$

For this kind of self-correcting system, an initial increase in collector current (at

Figure 3.6: Thévenin representation of the base bias circuit: simpler calculation of the bias currents.

constant VBE) causes VBE to decrease slightly (through an increase of VE), which counteracts the initial increase. The final result is a slightly increasing current instead of (without feedback) a very large increase of the bias current. This type of feedback can be used to decrease the bias sensitivity to all kinds of inaccuracies, due to spread, temperature, voltages, etc.
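As a sanity check of the claim that the VBE ≈ 0.65 V shortcut is accurate, the exact bias point of the Thévenin-equivalent circuit can be found by simple fixed-point iteration. All element and device values below are assumptions for illustration:

```python
import math

# Assumed circuit and device values (illustrative):
VT_TH = 0.025        # thermal voltage kT/q [V]
IC0 = 1e-12          # saturation current [A]
alpha = 100.0        # current gain
VEQ, REQ = 2.0, 1e3  # Thevenin base voltage [V] and resistance [ohm]
RE = 1e3             # emitter resistor [ohm]

# Quick estimate using VBE ~ 0.65 V and neglecting the base current:
IE_approx = (VEQ - 0.65) / RE

# Exact solution of  IE = (VEQ - IE/(alpha+1)*REQ - VBE(IE)) / RE
# by fixed-point iteration, starting from the estimate:
IE = IE_approx
for _ in range(200):
    IC = alpha / (alpha + 1.0) * IE
    VBE = VT_TH * math.log(IC / IC0)
    IE = (VEQ - IE / (alpha + 1.0) * REQ - VBE) / RE

print(IE_approx, IE)  # 1.35 mA estimated vs. roughly 1.46 mA exact
```

For these assumed values the 0.65 V shortcut lands within about ten percent of the exact solution, while avoiding the transcendental equation entirely.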

Example: suppressing variations in temperature

Q: A transistor is biased with a collector current of 1 mA. If the temperature increases by 100 K, the vBE at constant current decreases by 200 mV: −2 mV/K. Calculate the value of RE if the collector current may not increase by more than 10%.

A: For the given temperature shift:

$\Delta V_{BE} = \Delta T \cdot (-2\,\mathrm{mV \cdot K^{-1}}) = -200\,\mathrm{mV}$

Now, both the variation in voltage and the allowed current variation are known. From Ohm's law, $R_E = \frac{-\Delta V_{BE}}{\Delta I_E}$ with $\Delta I_E = 0.1\,\mathrm{mA}$, yielding $R_E = 2\,\mathrm{k\Omega}$. Note that this is the lowest value for RE; for larger values of RE, the circuit works even better.

Note: For the lowest value of RE, there already is a voltage drop of 2 V for a bias current of 1 mA. More insensitivity requires a higher resistance, but also requires a higher voltage. Hence, every solution is a compromise.
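The arithmetic of this example can be checked in a couple of lines:

```python
# Worked check of the temperature-suppression example above:
dVBE = -0.200    # VBE shift for +100 K at constant current [V]
dIE = 0.1e-3     # maximum allowed current increase: 10% of 1 mA [A]

RE = -dVBE / dIE  # Ohm's law: RE = -dVBE / dIE
print(RE)         # 2 kOhm, the minimum value of RE
```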

3.3 Biasing a MOS-transistor

Everything derived so far for the BJT is also applicable to the MOS-transistor. Here we again limit ourselves to the N-channel enhancement MOS transistor (which is off for VGS = 0). For applications in amplifier circuits, the MOST is usually biased in saturation, causing the drain current to depend — as a good approximation — only on the gate-source voltage, not on the drain-source voltage:

$i_G = 0$

$i_D = \frac{1}{2} K \cdot (v_{GS} - V_T)^2 \quad \text{for } v_{GS} > V_T,\; v_{DS} \geq v_{GS} - V_T$   (3.2)

The bias current ID can be set by a proper VGS using a voltage source. The main problem with biasing using a voltage source is — just like with the BJT — that variations (i.e. the signal that has to be amplified) are hard to superimpose. Therefore, the bias is usually set by means of a voltage divider from the supply voltage, as shown in figure 3.7.

Figure 3.7: Biasing a drain current ID using a voltage VGS

The MOS-transistor is, just like a diode and a BJT, sensitive to changes in temperature. For example, the threshold voltage VT has a temperature coefficient roughly equal to $-1\,\mathrm{mV \cdot K^{-1}}$. In the K-factor, the carrier mobility μ has the largest temperature dependence, which is about proportional to $T^{-2}$. The change in threshold voltage causes iD to rise with increasing temperature, while the decrease of the K-factor tends to decrease iD. Furthermore, the parameters K and VT spread. Using the same feedback method as for the BJT, here called source degeneration, their influence can be limited.

Example: Biasing a MOS-transistor

Q: Given is K = 0.5 mA/V² and VT = 1 V. Calculate VGS for ID = 1 mA.

A: A MOST is an element with (ideally) a quadratic relation between ID and VGS. Using the element equation of a MOS-transistor in strong inversion and in saturation, we immediately find the answer:

$V_{GS} = V_T \pm \sqrt{\frac{2 \cdot I_D}{K}}$

Substituting the values gives VGS = 3 V; verify that the second solution, VGS = −1 V, has no physical significance and is excluded by the conditions in (3.2). For a supply voltage of 10 V, we then get RG1 = 7/3 · RG2.
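The example can be reproduced directly from the element equation, using the values given above:

```python
import math

# Values from the example:
K = 0.5e-3   # current factor [A/V^2]
VT = 1.0     # threshold voltage [V]
ID = 1e-3    # desired drain current [A]

# ID = (K/2)*(VGS - VT)^2  =>  VGS = VT +/- sqrt(2*ID/K);
# only the '+' root satisfies vGS > VT in (3.2):
VGS = VT + math.sqrt(2.0 * ID / K)
print(VGS)  # 3.0 V
```

The '−' root gives −1 V, which violates the condition vGS > VT and is therefore discarded, just as in the example.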

Biasing by source degeneration

Earlier in this chapter, it was shown that it is possible to make the bias point of a BJT much less sensitive to all kinds of variations by using feedback. For the BJT, this is usually implemented by placing a resistor in series with the emitter: so-called emitter degeneration. For the MOS transistor the same can be done, yielding so-called source degeneration. The effect of source degeneration is analysed briefly below.

Figure 3.8: Biasing using source degeneration

In figure 3.8, a bias circuit of a MOS-transistor is presented. Figures 3.9a and 3.9b show the effect of source degeneration graphically. In the figure, the IS−VGS curve and the curve for IRS versus (VG − VRS) are shown. Because IS = IRS and VGS = VG − VRS, the intersection of the two curves is the solution of this bias circuit. Figure 3.9a shows the effect of various values of K graphically. Without source degeneration, the drain current increases in proportion to K, yielding the three points on the vertical line for fixed VGS; the current then varies between a and a'. With source degeneration, the three points on the RS-curve result, which show a significantly lower change in drain current: between b and b'.

Example: Biasing a MOS-transistor with source degeneration

Q: Determine the required value of the source degeneration resistor RS in figure 3.8 to get a drain current equal to ID.

A: This can be done in a number of ways, for instance with the very elegant brute force

Figure 3.9: Effect of RS on the bias current for variations in (a) K, and (b) VT

method:

$R_S = \frac{V_S}{I_D}$

$V_S = V_G - V_{GS}$

$V_G = V_{DD} \cdot \frac{R_{G2}}{R_{G1}+R_{G2}}$

$V_{GS} = V_T + \sqrt{\frac{2 I_D}{K}}$

$R_S = \frac{V_{DD} \cdot \frac{R_{G2}}{R_{G1}+R_{G2}} - V_T - \sqrt{\frac{2 I_D}{K}}}{I_D}$
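With assumed element values (reusing K and VT from the earlier example; divider and drain current chosen for illustration), the derivation gives a concrete resistor value:

```python
import math

# Assumed circuit values (illustrative):
VDD = 10.0
RG1, RG2 = 7e3, 3e3    # divider gives VG = 3 V
K, VT = 0.5e-3, 1.0    # MOS parameters, as in the earlier example
ID = 0.5e-3            # desired drain current [A]

VG = VDD * RG2 / (RG1 + RG2)        # gate voltage from the divider
VGS = VT + math.sqrt(2.0 * ID / K)  # required gate-source voltage
RS = (VG - VGS) / ID                # source degeneration resistor
print(VG, round(VGS, 3), round(RS, 1))  # 3.0 V, 2.414 V, about 1171.6 ohm
```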

Since the transistor has to be biased at a value of ID, we can safely assume that this value is a priori known. In a similar way, for a given RS, we can calculate the resulting ID, although this gives us a nice second-order equation for which just one solution is valid.

Chapter 4

Small-signal equivalent circuits

4.1 Introduction

Many electronic systems require voltage gain, current gain or, in general, power gain. Note that voltage gain or current gain larger than unity alone — without power gain larger than 1 — can easily and linearly be achieved using passive components such as capacitors and inductors, or with transformers. However, if power gain (larger than unity) is also required, then components with power gain must be used. In this book we assume BJTs and MOS transistors for this. Components with power gain are fundamentally nonlinear1 and calculations with them easily become cumbersome or even impossible. In this chapter, we aim at linear amplifiers2. Linear amplifiers are basic building blocks for many electronic systems. Non-linear amplifiers are also widely applied, e.g. in digital circuitry and in circuits for analog-to-digital conversion, but these are usually a straightforward extension of linear(ized) amplifiers, and are not addressed in this book. Since transistors are highly nonlinear, we must put in some effort to make (sufficiently) linear circuits using transistors. In chapter 6, feedback is used to linearize — or optimize in some other aspect — the behavior of amplifiers, at the cost of voltage gain. A high voltage gain is then a prerequisite for using feedback. In this chapter we do not use feedback, but only focus on amplifier properties without feedback. For sufficiently linear behavior, this implies that we can only use a small portion of the nonlinear characteristics ($v_{GS} - i_D$ or $v_{BE} - i_C$). To reduce the complexity of the calculations and to gain insight, linear models for the transistors will be introduced and used in calculations. These linear models obviously are only valid around the corresponding bias point of the transistors; they are called small-signal equivalent circuits (SSEC).

1Components with power gain actually just add power, from a power supply, to their output. Because the signal frequency of the power supply is usually not equal to the frequency of the output signal, such a component must be nonlinear.
2Making linear amplifiers using both linear and nonlinear components is quite hard. In general, an amplifier including nonlinear components is nonlinear, but the behavior may be sufficiently linear in some range to call it a linear amplifier.


4.2 Linear model for transistors

Using the nonlinear element equations in analyses creates many cumbersome calculations, and at best yields huge equations that offer no insight. Using linearised models of the “real” nonlinear element equations gives us:

• a reasonable (linear) approximation of the behavior of the nonlinear component or circuit around some bias point,

• a limited validity range of the results (increasingly smaller as we demand more accuracy),

• a linear circuit which can be easily analyzed,

• equations that can be interpreted (hence relations),

• insight.

The circuit we obtain after replacing all nonlinear components by their linear approximations, using first-order Taylor approximations, is called the “linear equivalent circuit”. Note that we now get an equivalent circuit that contains many DC sources (0th-order Taylor terms) as well as many linear impedances and linear controlled sources (1st-order Taylor terms). The DC sources in this equivalent circuit are only relevant for the DC signals of the circuit3. The impedances and controlled sources in the equivalent circuit are relevant for both the DC signal and the actual signal that is processed by the amplifier. If we for instance take a simple BJT amplifier, where the element equations are approximated with a first-order Taylor expansion, we get the linear equivalent circuit shown in figure 4.1.

Figure 4.1: “linear equivalent circuit”: the principle

In the circuit on the left hand side of figure 4.1, the BJT is replaced by a 1st order Taylor expansion (shown in the middle of the figure) to get the linear equivalent circuit on the right hand side.

3From a fundamental point of view a DC signal is not a signal because it does not contain any information: the signal is DC. Yes, I know, a sine also doesn’t contain information because if I know the sine now, then I know what it will do in the future and what it did in the past. Still in some way we use sine waves as fundamental signals.

As stated above, a nonlinear element is approximated with a linear equivalent circuit to calculate the signal transfer function: the output signal resulting from the input signal. Now, using the principle of superposition, it follows that:

To calculate signal-related properties of the circuit (e.g. signal transfer function, impedances), contributions of DC sources are irrelevant: they don't need to be calculated, which corresponds to setting every DC source to 0 (0 V or 0 A).

If we set all DC sources in figure 4.1b to zero, we get a “linear equivalent circuit” with only (AC) signals, that can be used to calculate any property of the circuit related to AC signals. In reality, we never use the equivalent of figure 4.1b, because of the needless presence of zero-valued DC voltage and current sources.

A logical next step is replacing these zero-valued DC sources by their equivalent impedances: a short for a DC voltage source, and an open for a DC current source. The equivalent circuit without the DC sources is called the “small-signal equivalent circuit” (SSEC).

For the circuit in 4.1a, the small-signal equivalent circuit is shown on the right hand side of figure 4.2.

Figure 4.2: “Small-signal equivalent circuit” of the circuit in figure 4.1

4.2.1 SSEC of a BJT

A small-signal equivalent model of a BJT follows from the most general expression for the base and collector current of a BJT:

$i_C = I_{C0} \left( e^{q v_{BE}/kT} - 1 \right) \left( 1 + \frac{v_{CE}}{V_A} \right)$   (4.1)

$i_B = \frac{1}{\alpha_{fe}} \cdot i_C$   (4.2)

These equations cover the major part of the response of the BJT; only the region where VCE < 0.1 V is not covered. For almost all applications it is sufficient to limit the operation range to vCE > 0.1 V. A Taylor expansion of iC and iB about the bias point iC = IC respectively iB = IB then gives:

$I_C + d(i_C) = i_C(V_{BE}, V_{CE}) + \frac{1}{1!} \left( \frac{\partial i_C}{\partial v_{BE}} \cdot (v_{BE} - V_{BE}) + \frac{\partial i_C}{\partial v_{CE}} \cdot (v_{CE} - V_{CE}) \right) + \ldots$

$I_B + d(i_B) = i_B(V_{BE}, V_{CE}) + \frac{1}{1!} \left( \frac{\partial i_B}{\partial v_{BE}} \cdot (v_{BE} - V_{BE}) + \frac{\partial i_B}{\partial v_{CE}} \cdot (v_{CE} - V_{CE}) \right) + \ldots$

The zeroth-order term corresponds to the bias point setting, and the first derivative represents the “first-order” or linear approximation. The variations are assumed to be so small that the second- and higher-order terms are negligible compared to the first-order term. For small variations, the terms (vBE − VBE) and (vCE − VCE) can be replaced by their differential notations, d(vBE) and d(vCE). This can be written even shorter by replacing the differential terms by “small-signal symbols”: for example, the base voltage variation d(vBE) ≡ vbe and the collector current variation d(iC) ≡ ic. Concentrating on the relation between the variations, we get:

$d(i_C) \equiv i_c = \frac{\partial i_C}{\partial v_{BE}} \cdot v_{be} + \frac{\partial i_C}{\partial v_{CE}} \cdot v_{ce}$   (4.3)

$d(i_B) \equiv i_b = \frac{\partial i_B}{\partial v_{BE}} \cdot v_{be} + \frac{\partial i_B}{\partial v_{CE}} \cdot v_{ce}$

For performing calculations on circuits containing transistors or diodes, it is convenient to have equivalent circuits for the non-linear components. The small-signal equivalent circuit of a transistor, corresponding to the equations in (4.3), is shown in figure 4.3. With the SSEC of a BJT as presented in figure 4.3, to construct an SSEC of a circuit, every BJT should be replaced by 2 resistors and 2 voltage-controlled current sources. This means that the SSEC contains many more components than the original circuit. The advantage is of course that the SSEC is linear, whereas the original circuit was inherently nonlinear. For most applications, using the full SSEC of a BJT is not necessary: some components may be neglected and hence may be left out of the SSEC. The next list enumerates the components in descending order of importance:

Figure 4.3: Small-signal equivalent circuit of a BJT

• the controlled output current is THE reason to use a transistor: it may not be left out, except if you destroyed the transistor. The signal variation forced by this source equals $v_{be} \cdot \partial i_C/\partial v_{BE}$.

• the base current is just as fundamental, but unwanted. The (small-signal) base current as a response to a (small-signal) base-emitter voltage corresponds to the resistor between B and E, and may not be omitted.

• the output resistance, the resistor between C and E, is only relevant if the external load impedance of the BJT is high-ohmic compared to the transistor's output impedance. Only in that case must the transistor's output resistor be included; it can freely — and, to limit calculation complexity, should — be excluded otherwise.

• the input current as a result of output voltage variations is usually small and can usually be neglected.

Hence the SSEC to be used is usually the one shown on the right hand side of figure 4.4. Only if the load of the transistor is very high-ohmic (such as when using a DC current source as load) should the SSEC on the left hand side be used.

Figure 4.4: SSEC of the BJT to be used.

Notational simplification

The notation used in (4.3) and in figure 4.4 is not that easy to read. For that reason, meaningful shorthand notations are introduced for some properties of the components in the SSEC of the BJT:

• the main use of the BJT is to create an output current variation from its input voltage variation. The ratio between the two is called the transconductance gm:

$g_m = \frac{\partial i_C}{\partial v_{BE}} \equiv \frac{i_c}{v_{be}}$   (4.4)

• the base current is fundamental, and is a factor “current gain” or αfe smaller than the output current. Consequently the corresponding resistor between B and

E has a value:

$r_{be} = \left( \frac{1}{\alpha_{fe}} \cdot \frac{\partial i_C}{\partial v_{BE}} \right)^{-1} = \frac{\alpha_{fe}}{g_m}$   (4.5)

• in many applications, achieving a high voltage gain is very important. It can be derived that the maximum achievable voltage gain factor using passive load impedances equals the product of gm and the transistor's output resistance rce; for this product, frequently the symbol μ is used, leading to:

$r_{ce} \equiv \frac{\mu}{g_m}$   (4.6)

Using these shorthand notations, the resulting SSEC for a BJT is shown below. As stated before, the simplest version that can be used, should be used. This usually means that the SSEC of choice consists of only 2 linear components: the voltage-controlled current source that represents the essence of the transistor, and the input resistor that represents the fundamentally present main unwanted effect: input current.
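The shorthand parameters are easy to check numerically: for the exponential law, gm = ∂iC/∂vBE = IC/(kT/q), and the linear model ic = gm·vbe tracks the exact characteristic only for small excursions. The bias values below are illustrative assumptions:

```python
import math

# Assumed bias point (illustrative):
VT_TH = 0.025  # thermal voltage kT/q [V]
IC0 = 1e-12    # saturation current [A]
VBE = 0.52     # bias base-emitter voltage [V]

IC = IC0 * math.exp(VBE / VT_TH)  # bias collector current
gm = IC / VT_TH                   # gm = dIC/dVBE for the exponential law

def error(vbe):
    """Ratio of the exact current increment to the linear model gm*vbe."""
    exact = IC0 * math.exp((VBE + vbe) / VT_TH) - IC
    return exact / (gm * vbe)

print(error(1e-3))   # ~1.02: the SSEC is accurate for a 1 mV signal
print(error(50e-3))  # ~3.19: a 50 mV signal breaks the linearisation
```

This illustrates why the SSEC is explicitly a small-signal model: the first-order Taylor term dominates only for excursions well below kT/q.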

Figure 4.5: Small-signal equivalent circuit of the BJT.
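The shorthand relations above can be put into a small numeric sketch. The bias values IC, αfe and μ below are illustrative assumptions, not values from the text:

```python
# Hedged sketch: BJT small-signal parameters from an assumed bias point,
# using gm ~ (q/kT)*IC (4.11), rbe = alpha_fe/gm (4.5) and rce = mu/gm (4.6).

def bjt_ssec(IC, alpha_fe, mu, T=300.0):
    """Return (gm, rbe, rce) for a BJT biased at collector current IC."""
    kT_over_q = (1.380649e-23 / 1.602176634e-19) * T  # ~25.9 mV at 300 K
    gm = IC / kT_over_q          # transconductance
    rbe = alpha_fe / gm          # input resistance between B and E
    rce = mu / gm                # output resistance between C and E
    return gm, rbe, rce

gm, rbe, rce = bjt_ssec(IC=1e-3, alpha_fe=100, mu=1000)  # assumed values
print(f"gm = {gm*1e3:.1f} mA/V, rbe = {rbe/1e3:.2f} kOhm, rce = {rce/1e3:.1f} kOhm")
```

At IC = 1 mA this gives gm of roughly 39 mA/V, i.e. 1/gm of roughly 26 Ω, consistent with the rule of thumb used later in this chapter.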

4.2.2 SSEC of a MOS transistor The previous section described the (linear) small-signal equivalent circuit for a BJT; the work in that section is repeated here to get the SSEC for a MOS transistor. Starting with the element equations of a MOS transistor (the drain current equations are for strong inversion saturation and strong inversion linear region respectively):

iG = iG(vGS, vDS) = 0
iD,saturation = iD(vGS, vDS) = ½ · K · (vGS − VT)² · (1 + λvDS)    (4.7)
iD,linear = iD(vGS, vDS) = K · ((vGS − VT) · vDS − ½ vDS²)

In the expression for the saturation region, the term (1 + λvDS) is only relevant if the external load impedance of the MOS transistor is high ohmic compared to the transistor’s output impedance. In all other cases we can freely neglect this term. The SSEC for the MOS transistor now follows from a first-order Taylor series expansion:

d(iG) = ig = ∂iG/∂vGS · vgs + ∂iG/∂vDS · vds
d(iD) = id = ∂iD/∂vGS · vgs + ∂iD/∂vDS · vds    (4.8)

Just like BJTs and vacuum tubes, a MOS transistor is essentially a voltage-controlled current source. The parameter ∂iD/∂vGS is the most important small-signal parameter; just like for the BJT, this parameter is called the transconductance gm. The parameter ∂iD/∂vDS corresponds to the output conductance and is denoted with the symbol gds = 1/rds. For small signals (variations) in drain current:

id = gm · vgs + gds · vds    (4.9)

with

gm = K · (vGS − VT) · (1 + λ · vDS)
gds ≡ gm/μ = ½ K(vGS − VT)² · λ    (4.10)

These SSEC parameters are specific to the bias point of the transistor: the SSEC parameters follow from a Taylor series approximation about the bias point. Because of this, in the equations above, vGS = VGS and vDS = VDS. The resulting small-signal equivalent circuit of a MOS transistor is shown in figure 4.6a. The effect of the output resistance is often neglected; in that case the SSEC for a MOS transistor is given in figure 4.6b.

Figure 4.6: Small-signal equivalent circuit of a MOST

4.2.3 Small-signal parameters

For calculations using the SSEC of a BJT or MOS transistor, the various small-signal parameters (mainly gm and rbe, and rce or rds only when required) must be known. These small-signal parameters follow from the bias point and the element equations.

BJT

The most important small-signal parameter of the BJT is the transconductance gm. From its definition gm = ∂iC/∂vBE and the element equation we get:

gm = (q/kT) · IC0 · e^(q·vBE/kT) · (1 + vCE/VA)
gm = (q/kT) · IC0 · e^(q·vBE/kT) ≅ (q/kT) · IC    (4.11)

where the first element equation results in a finite rce; whenever possible the second — simplified — element equation will be used. It can be seen that the transconductance of a BJT is proportional to its DC (bias) current IC. For the rbe:

rbe ≡ (ib/vbe)^(−1) = αfe / ((q/kT) · IC0 · e^(q·vBE/kT)) = αfe/gm    (4.12)

The output resistance of a BJT — rce — also follows from the element equations and the bias conditions. However, in this book we use either a prespecified rce or a value related to a prespecified maximum achievable gain: rce = μ/gm.

MOS transistor

For the MOS transistor, similarly simple relations can be derived for the small-signal parameters. From the element equations for operation in strong inversion and saturation (for now, the linear region is neglected):

iD = ½ · K · (vGS − VT)² · (1 + λvDS)    (4.13)
iG = 0

The most essential parameter — the transconductance — can be calculated to be gm = K · (vGS − VT) · (1 + λvDS). The last term in this expression is preferably neglected, yielding gm = K · (vGS − VT). This gm follows from the bias VGS (and VDS if we have to include a finite output resistance) and from the transistor parameter K. Reusing the transistor's element equation, this equation for gm can be rewritten into 3 useful forms that can be used to calculate the transconductance:

gm = √(2K · ID · (1 + λ · VDS)) ≅ √(2K · ID)    (4.14)
   = K · (VGS − VT)
   = 2ID / (VGS − VT)

Which of these equations is most useful depends entirely on which properties are known; usually 2 out of the 3 parameters {K, VGS, ID} are known and can be substituted in one of the 3 gm-equations.

4.3 Amplifier circuits

Chapter 3 explained that a transistor must be biased properly in some DC bias point to be able to operate. The current chapter deals with small input signal variations about the bias point to get output current variations that can — e.g. using Ohm's law — be transformed into output voltage variations. For calculations of the latter we introduced the small-signal equivalent circuits: linear models around the bias point that deal only with signal variations around the bias conditions.

4.3.1 Coupling the input and output In a real circuit, applying variations onto the input of a transistor can be done in multiple ways:

• the easiest way may seem to be driving the transistor with a series connection of a DC voltage source and an AC signal source. However, putting two voltage sources in series proves to be hard in actual circuitry.

• something else

One of the most straightforward ways of that "something else" is used throughout this book. Noting that the bias voltage is fundamentally DC and a signal is fundamentally AC opens possibilities to add voltages. This is worked out using a circuit from chapter 3, now used to amplify an input voltage:


- - Figure 4.7: Amplifier with a BJT: the input signal is coupled to the amplifier via some undefined thing marked “?”

Coupling the AC voltage source directly between B and E would prevent us (independent of the input signal) from realising a suitable bias for the transistor. Already, we have a number of requirements with respect to coupling the input signal:

• coupling the input signal may not disrupt the DC bias

• the input signal must, preferably, not be attenuated by the coupling

This seems to be a contradiction: on the one hand, the input signal should be coupled onto the transistor's input terminals, while on the other hand it shouldn't. The solution is to satisfy both requirements at the same time, but not for the same frequency. We can now redefine the requirements:

• for DC, the coupling must have a high impedance in order not to disturb or compromise proper (DC) biasing settings

• for signal frequencies, the coupling must have a low impedance

Again, a number of solutions can be found for this. The most straightforward is the use of a coupling component that is high ohmic for f = 0 Hz and low ohmic for signal frequencies: a capacitor as coupling component could implement this4. The value of the capacitor is not of importance for the blocking of the DC current: the impedance of every capacitor for DC current is infinitely high. For passing the AC voltage from the source to the amplifier, the size of the capacitor most definitely is of importance, as it creates — together with the input resistance of the circuit — a first-order high-pass characteristic. If the -3dB cutoff frequency of that characteristic is sufficiently below the lowest signal frequency, the coupling can be considered good for any signal frequency5.

Example For an amplifier with an input resistance of rin=1 kΩ and an input signal between 20 Hz and 20 MHz:

• if the attenuation due to the coupling has to be smaller than 3 dB, a minimum value for the coupling capacitor follows from the signal transfer function:

H(jω) = vin/vsource = jω · rinCcouple / (1 + jω · rinCcouple)

Being in a standard form for a first-order high-pass function (see section 0.5.12), it directly follows that ω0 = 1/(rinCcouple) and hence

Ccouple ≥ 1/(2π · fminimum · rin) = 8 μF

• if the amplifier has to settle within 1% of its bias within 1 s after switching on the source, the maximum value for the coupling capacitor is:

vC(t) = vC(t = ∞) · (1 − e^(−t/(rinCcouple)))

vC(t)/vC(t = ∞) ≥ 0.99 ⇒ Ccouple ≤ t/(rin · ln(100)) = 220 μF

4We could also make the bias circuit inductive in combination with a signal source with a non-zero series resistance, but in real circuits that is usually not as easy as using one capacitor.
5This would indicate that "larger is better", and that is the case for ideal capacitors if you consider only the signal attenuation. A drawback of using very large capacitors is that reaching a steady-state DC bias condition may take very long. For example, if you set the cutoff frequency at 1 mHz then you would easily have to accept a startup time of several times 1000 s...
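The two bounds in the example can be reproduced with a few lines of arithmetic; the numbers (rin = 1 kΩ, 20 Hz, 1 s settling) come from the example itself:

```python
# Hedged sketch: bounds on the coupling capacitor from the example above.
from math import pi, log

r_in = 1e3       # Ohm, amplifier input resistance
f_min = 20.0     # Hz, lowest signal frequency
t_settle = 1.0   # s, allowed startup time for 1% settling

C_min = 1 / (2 * pi * f_min * r_in)    # cutoff at f_min  -> about 8 uF
C_max = t_settle / (r_in * log(100))   # 1% in 1 s        -> about 220 uF
print(f"C_min = {C_min*1e6:.1f} uF, C_max = {C_max*1e6:.0f} uF")
```

Any capacitor between these two values satisfies both the attenuation and the settling requirement.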

At the output of the amplifier, we have a similar problem. At the collector of the transistor, we have a DC bias voltage plus an AC output voltage. Usually, we want to pass only the signal to the load; also here, a capacitor can give us the desired effect.

4.3.2 SSEC of a basic amplifier circuit

To calculate small-signal properties of circuits — such as input impedance, output impedance or signal gain — we need a SSEC of the total circuit. This SSEC includes biasing, signal coupling, SSECs of nonlinear components, sources and more. This (linear) SSEC can then be analyzed using all the theorems, rules, tips and tricks of linear network analysis. We will now create a SSEC for the circuit in figure 4.7, and then do some derivations of small-signal parameters. Figure 4.8 shows a step-by-step derivation of a SSEC:


Figure 4.8: Amplifier circuit with a BJT, and step-by-step derivation of its SSEC

In the first step the BJT is replaced by its SSEC; in this step it is important to keep track of the original position of the base, collector and emitter nodes. Also in this first step, DC sources are set to 0. The second step is redrawing and simplifying the SSEC obtained after the first step: this usually boils down to redrawing. The SSEC in figure 4.8c is a correct small-signal equivalent that can be used to calculate frequency-dependent gain, input impedance and output impedance. Usually it is implicitly assumed that the coupling capacitors are sufficiently low ohmic at signal frequencies. In this context "sufficiently low ohmic" means that at signal frequencies the voltage drop across the capacitors is near zero and hence the capacitors can be modelled as shorts. In that case the SSEC in figure 4.8c can be further simplified to the SSEC in figure 4.8d. Using the SSEC in figure 4.8d, some small-signal properties will now be derived for the circuit in figure 4.8a:

• input impedance
The input impedance of an amplifier is the impedance we "see" when looking from the controlling source "into" the input port of the circuit. According to Mr. Ohm, zin = vin/iin, or for a purely resistive input impedance rin = vin/iin. For example, when driving the circuit with a voltage source, the current iin(vin) must be calculated to get the input impedance. For the SSEC in figure 4.8d or figure 4.9a this yields rin = αfe/gm // RB. Note that driving the circuit with a current source yields the same result, as do other equally correct analysis methods to calculate an impedance.


Figure 4.9: Amplifier circuit with BJT and SSECs; a) determining input resistance b) deter- mining output resistance

• output impedance
The output impedance of the amplifier is the impedance we "see" when looking "into" the output port of the circuit. For linear networks, and hence for SSECs, this impedance can be calculated in a number of ways. One way is to use the definition of small-signal impedance, giving rout = vout/iout, where the only controlling source is now driving the output port and all other independent sources are set to 0. An example of this is given in figure 4.9b. Other methods are, among others, using rout = vout,open/iout,short−circuit (only applicable to linear circuits) or loading the output with a known impedance and calculating the output resistance from there (using the principle of a voltage divider). For the given circuit, we have rout = RC // μ/gm.

• voltage gain
The voltage gain (i.e. voltage amplification factor) of the amplifier can readily be evaluated — from figure 4.9a — to be Av ≡ vout/vin = −gm · RC.

For the voltage gain of a basic amplifier circuit with a MOS transistor, the same principles can be applied. The circuit itself, including all biasing components, is presented in figure 4.10a; the SSEC is shown in figure 4.10b. Using the SSEC, we can easily derive

that the input impedance is RG1//RG2, that the output impedance is equal to RD and that the voltage gain equals Av = −gm · RD.

Figure 4.10: Amplifier circuit with MOST, and its SSEC
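The three results for the MOS amplifier can be sketched numerically. The component values and gm below are assumptions for illustration, not values from the text:

```python
# Hedged sketch: rin, rout and Av of the MOS amplifier of figure 4.10,
# neglecting the transistor's rds (assumed component values).

def par(*rs):
    """Parallel connection of resistances."""
    return 1 / sum(1 / r for r in rs)

gm = 1e-3                     # A/V, assumed bias transconductance
RG1, RG2, RD = 1e6, 1e6, 10e3 # Ohm, assumed bias components

r_in = par(RG1, RG2)          # gate draws no current
r_out = RD                    # rds neglected
A_v = -gm * RD                # inverting voltage gain
print(f"rin = {r_in/1e3:.0f} kOhm, rout = {r_out/1e3:.0f} kOhm, Av = {A_v:.0f}")
```

Note that the bias divider RG1//RG2, not the transistor, sets the input impedance.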

Design procedure; an example

We wish to design an amplifier with a BJT having a voltage gain of |Av| = 100. Note that the basic amplifier circuit of figure 4.8a is an inverting amplifier, for which Av = −gm · RC. We are usually not interested in its inverting properties, so we simply work with |Av| = gm · RC, which leads to gm · RC = 100; then we get:

gm ≅ (q/kT) · IC
|Av| ≅ (q/kT) · IC · RC

where IC is the collector bias current. For a voltage gain |Av| = 100 it follows that the voltage drop across RC is |Av| · kT/q, which amounts to 2.58 V at room temperature. The minimum supply voltage for this amplifier now depends on the maximum input signal, the voltage gain and the minimum vCE for proper operation of the BJT:

VCC ≥ |Av| · kT/q + |Av| · Vin + 0.1 V

Note that for sufficiently linear behavior of this amplifier the signal must be small. In this context "sufficiently small" means that the variation in collector current (ic) must be much smaller than the DC bias collector current IC. Using more complex circuit topologies, the gain-supply voltage trade-off can be circumvented: then high gain at low supply voltage can be obtained. This is at the cost of components, and usually also at the cost of power consumption and maximum frequency of operation.
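The arithmetic of this design example is easy to verify; the maximum input amplitude below is an assumed value, everything else follows the example:

```python
# Hedged check of the |Av| = 100 design example at room temperature.
kT_q = 0.0258          # V, thermal voltage at ~300 K
Av = 100               # required voltage gain magnitude
V_in_max = 5e-3        # V, assumed maximum input amplitude

V_RC = Av * kT_q                        # required DC drop across RC
V_CC_min = V_RC + Av * V_in_max + 0.1   # gain drop + output swing + min vCE
print(f"V_RC = {V_RC:.2f} V, minimum supply = {V_CC_min:.2f} V")
```

The 2.58 V drop across RC is forced purely by the required gain, which is exactly the gain-supply voltage trade-off the text mentions.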

We will look at the design of transistor circuits in more detail in chapter 5, while using the SSECs from this chapter.

Chapter 5

Amplifier circuits

5.1 Introduction

In chapter 4, we analyzed two simple amplifier circuits. Figure 4.8 shows a very basic amplifier with one BJT (and its SSEC). In that circuit, the transistor is driven at the vbe port while the output port is vce. This may be somewhat hard to see in the original circuit, but it is certainly very visible in its SSEC. The MOS version is shown in figure 4.10, in which the transistor is driven at the vgs port and where the output is at the vds port. These circuits are both repeated in figure 5.1.


Figure 5.1: Amplifier circuit with BJT and MOST from chapter 4, with their SSEC

In general a transistor is a three terminal device that behaves like a (nonlinear) voltage controlled current source (VCCS). To be able to behave like a VCCS a device must be at least a two-port device. From a fundamental point of view a two-port device has 4 terminals, whereas a transistor has just 3. This may seem a contradiction but it is not: it only implies that the two ports share one terminal. For a circuit using a BJT this yields: • a common-emitter circuit (CEC) in which the emitter is shared by the input port


and output port. Due to the nature of the BJT, the input port must be formed by the vbe voltage while the output port voltage is vce. The emitter node is common in the input and output port, hence the name CEC. • a common-base circuit (CBC) in which the base is shared by the input port and output port. So the input port must be formed by the vbe voltage while the output port voltage is vcb. • a common-collector circuit (CCC) in which the collector is shared by the input port and output port. So the input port must be formed by vbc while the output port voltage is vce. All these basic circuits using one BJT have distinct small-signal properties. The figure below lists the 3 possible configurations1, starting from the generalized bias circuit for a BJT:


Figure 5.2: Amplifier circuits with a BJT: b) is the CCC, c) is the CEC and d) is the CBC.

The generalized bias circuit for one BJT is shown in figure 5.2a. In all circuits either the base or the emitter or both are directly driven by the input signal; driving is done via (DC blocking) capacitances. Drawing one voltage mesh including the input signal and one transistor node pair and drawing one voltage mesh that includes both the output port of the circuit and one transistor node pair directly reveals the common node (in both meshes). In this way figure 5.2b represents the common-collector circuit (CCC), while figure 5.2c is the common-emitter circuit (CEC) and figure 5.2d is the common base circuit (CBC). Obviously the BJT and MOS counterparts are quite similar; for a circuit using a MOS transistor this yields:

1It is assumed that the input voltage source is grounded with one of its terminals. If we could make a floating signal voltage source — this is at least very hard — we could get one more configuration.

• a common-source circuit (CSC) in which the source is shared by the input port and output port. Due to the nature of the MOS transistor, the input port must be formed by the vgs voltage while the output port voltage is vds. • a common-gate circuit (CGC) in which the gate is shared by the input port and output port. So the input port must be formed by the vgs voltage while the output port voltage is vdg. • a common-drain circuit (CDC) in which the drain is shared by the input port and output port. So the input port must be formed by vgd while the output port voltage is vds. These three configurations are shown in figures 5.3c, d and b respectively.

Figure 5.3: Amplifier circuits with a MOS transistor: b) is the CDC, c) is the CSC and d) is the CGC.

5.1.1 The common-base circuit, CBC The CBC is shown again in figure 5.4a; figure 5.4b gives its small-signal equivalent circuit for frequencies where the impedances 1/jωC1 and 1/jωC2 are small enough to be negligible compared to the other impedances. Finally, figure 5.4c gives a more comprehensible equivalent circuit, obtained by cleaning up the intermediate SSEC as much as possible. This SSEC is now used to calculate some small-signal properties of the CBC: its input and output resistance, as well as the small signal voltage gain.


Figure 5.4: a: CBC and small-signal equivalent circuits b: first step c: final SSEC

The small-signal input resistance is, by definition, the quotient of vin and iin. We can choose whether we use a driving current or voltage source. Using the brute-force method, and using a driving voltage source:

rin = vin/iin
iin = ire − ib − gm · vbe
ire = vin/RE
ib = gm · vbe/αfe
vbe = −vin

Substituting these equations gives for the input resistance:2

rin = RE // αfe/gm // 1/gm    (5.1)

The input resistance of a CBC consists of three contributions. For decent BJTs, the current gain factor αfe is quite large, resulting in an αfe/gm term that is negligible compared to the 1/gm term. Using the relation for the transconductance of a BJT, it follows for the terms RE and 1/gm that:

RE = VRE/IE   and   1/gm = 1/(40 · IC) ≅ RE/(40 · VRE)

from which we see that both terms are equal in size for a DC bias voltage drop across RE as low as 25 mV. In virtually all real CBC circuits this voltage drop is much larger, usually some tenths of a volt, which gives an input resistance

rin ≅ 1/gm    (5.2)

2The given relation is in its most simple notation. However, we can write this equation in infinitely many equally correct ways.

The small-signal voltage transfer is the ratio of the voltage variation at the output to the variation at the input. Again, using the brute-force method, we have:

H = vout/vin
vout = −ic · RC
ic = gm · vbe
vbe = −vin

H = vout/vin = gm · RC    (5.3)

Note that the input and output voltages are in phase, in contrast with the CEC.

The small-signal output resistance of a circuit can be obtained in a number of ways. For linear circuits (such as the SSEC), the three methods which are used most often are:

rout = vout|rout→∞ / iout|rout=0    (for equal vin)
rout = vout,forced / iout    (vin = 0 and iin = 0)
rout = vout / iout,forced    (vin = 0 and iin = 0)

Application of the first method, while using equation (5.3), gives:

vout|rout→∞ = gm · RC · vin
iout|rout=0 = gm · vin
rout = RC

This first method requires 2 calculations, while with the other two methods the output of the amplifier is driven with a current or voltage source, yielding the output impedance directly. These latter two methods are mostly used in this book.

For a CBC, rin is low and rout is high: the circuit acts like a "current-in-current-out" amplifier.
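The CBC results (5.1)-(5.3) can be sketched numerically. The bias values below (roughly IC = 1 mA, so gm ≈ 40 mA/V) are assumptions for illustration:

```python
# Hedged sketch: CBC input resistance, transfer and output resistance.

def par(*rs):
    """Parallel connection of resistances."""
    return 1 / sum(1 / r for r in rs)

gm, alpha_fe = 40e-3, 100   # assumed: ~1 mA bias at room temperature
RE, RC = 1e3, 5e3           # assumed bias resistors

r_in_exact = par(RE, alpha_fe / gm, 1 / gm)   # eq. (5.1)
r_in_approx = 1 / gm                          # eq. (5.2)
H = gm * RC                                   # eq. (5.3), non-inverting
r_out = RC
print(f"rin = {r_in_exact:.1f} Ohm (~{r_in_approx:.0f} Ohm), H = {H:.0f}, rout = {r_out:.0f} Ohm")
```

The exact rin (about 24 Ω here) indeed sits very close to the 1/gm approximation of 25 Ω.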

5.1.2 The common-gate circuit, CGC The MOS equivalent of the CBC is called a common-gate circuit, CGC, and is shown in figure 5.5. Its input resistance, transfer and output resistance will now be derived using figure 5.5c:


Figure 5.5: CGC and its small-signal equivalent circuit

The small-signal input resistance of the circuit can be calculated using e.g. the brute-force method. Driving the input with a voltage source:

rin = vin/iin
iin = irs − gm · vgs
irs = vin/RS
vgs = −vin

rin = RS // 1/gm ≅ 1/gm

The small-signal transfer can be derived the same way as for the CBC: H = vout/vin with vout = −gm · vgs · RD and vgs = −vin:

vout/vin = gm · RD    (5.4)

The small-signal output resistance can readily be derived by driving the output using e.g. a current source. Note that this is then the only independent source:

rout = vout/iout
vout = (iout − id) · RD
id = gm · vgs = 0
rout = RD

5.1.3 The common-collector circuit, CCC

The CEC and CBC — and their MOS equivalents CSC and CGC — are circuit con- figurations in which the transistors are driven at their input port: between base and emitter for the BJT, and between gate and source for the MOS transistors. Another useful configuration is the common-collector circuit where the transistor is driven at one node of the input and one node of the output port. Using a type of local feedback of the output current to create the voltage at the non-driven input port node, a useful circuit is created. The circuit schematic of the CCC with an NPN is given in figure 5.6a.


Figure 5.6: CCC and the small-signal equivalent circuit

Feedback from the output current to the non-driven input node is accomplished by a resistor (or impedance) RE. The higher its resistance, the higher the feedback factor; the highest attainable value is ∞ Ω, using a DC current source instead of RE. Using a DC current source, it may be clear that the vBE of the transistor is fixed (assuming no external load) and that then the difference between the input voltage of the CCC and its output voltage is merely a DC shift. Hence the small-signal value of vIN equals that of vOUT, and therefore in this case the small-signal gain equals unity. In figure 5.6b, a linear equivalent circuit is shown for frequencies where 1/jωCin can be neglected. Figure 5.6c gives a more compact version of 5.6b. Here, the name "CCC" speaks for itself: the collector is part of both the input and the output circuit.

The input resistance Below, the input resistance of the CCC is derived using a driving voltage source. Note that — because of Ohm's law — we would get a similar expression if we had used a driving current source. Working systematically, we get:

rin = vin/iin
iin = iRB1//RB2 + vbe/(αfe/gm)
iRB1//RB2 = vin/(RB1//RB2)
vbe = vin − RE · (gm · vbe + gm · vbe/αfe)

The equations above are sufficient to calculate everything within this circuit. Since the expression for vbe depends on vbe itself, we must separate the variables3, yielding:

vbe = vin / (1 + RE · gm + RE · gm/αfe)
iin = vin/(RB1//RB2) + vin / ((αfe/gm) · (1 + RE · gm + RE · gm/αfe))
rin = (1/(RB1//RB2) + 1/((αfe/gm) · (1 + RE · gm + RE · gm/αfe)))^(−1)

After simplification of this relation, we may get (5.5). Note that simplification does not change the relation; it merely changes its form, appearance and readability. The above derivation would have been shorter using a driving current source.

rin = RB1//RB2 // (αfe/gm + (1 + αfe) · RE)    (5.5)

It follows from this relation that the input resistance of the circuit is more or less equal to αfe · RE, parallel to the resistance of the input (bias) circuit: the input resistance of a CCC is usually relatively high.
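That the brute-force expression and the simplified form (5.5) are the same relation in different clothing can be confirmed numerically; all component values below are assumed:

```python
# Hedged check: brute-force CCC input resistance vs. simplified form (5.5).

def par(*rs):
    """Parallel connection of resistances."""
    return 1 / sum(1 / r for r in rs)

gm, alpha_fe = 40e-3, 100            # assumed bias
RB1, RB2, RE = 100e3, 100e3, 1e3     # assumed bias components

# brute-force expression from the derivation in the text
r_in_brute = 1 / (1 / par(RB1, RB2)
                  + 1 / ((alpha_fe / gm) * (1 + RE * gm + RE * gm / alpha_fe)))
# simplified form (5.5): rbe in series with (1 + alpha_fe)*RE, parallel to the bias divider
r_in_55 = par(par(RB1, RB2), alpha_fe / gm + (1 + alpha_fe) * RE)
print(f"{r_in_brute:.1f} Ohm vs {r_in_55:.1f} Ohm")
```

With these values both forms give about 34 kΩ, dominated by the bias divider rather than by αfe · RE.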

The small signal voltage gain of the circuit equals the ratio between the small signal output voltage and the (driving) input voltage:

H = vout/vin
vout = RE · ie
ie = ib + gm · vbe
ib = vbe/(αfe/gm)
vbe = vin − vout

3Alternatively, we would have to solve a recursive relation, which may take quite a long time to calculate...

ie = (vin − vout)/(αfe/gm) + gm · (vin − vout)
vout = vin · RE · (gm/αfe + gm) − vout · RE · (gm/αfe + gm)

H = gm · RE · (1/αfe + 1) / (1 + gm · RE · (1/αfe + 1))

From the relation above, it follows that the small-signal voltage gain is smaller than 1, and approaches 1 when the product of the transconductance gm and the resistance RE is large. Note that if we use a current source instead of RE, the signal transfer (without load impedance) will be exactly equal to 1. This circuit is called an "emitter-follower"4, since the output and input voltages have an equal phase and (almost) equal amplitude.

The output resistance In the derivation of the output resistance shown below, it is (arbitrarily) assumed that the output port is driven by a voltage source. In the deriva- tion, all other independent sources are then set to zero (i.e. vin =0) yielding:

rout = vout/iout
iout = −ib + iRE − gm · vbe
ib = −vout/(αfe/gm)
iRE = vout/RE
vbe = −vout

iout = vout · (gm/αfe + gm + 1/RE)

rout = RE // αfe/gm // 1/gm    (5.6)

The dominant term in the expression for rout is the one set by the transistor's transconductance, 1/gm. For instance, a bias current of about 1 mA will give an output resistance of less than about 25 Ω, which is relatively low.
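The emitter-follower gain and output resistance can be sketched with the same assumed bias values used earlier (roughly 1 mA, so gm ≈ 40 mA/V):

```python
# Hedged sketch: emitter-follower (CCC) gain and output resistance, eq. (5.6).

def par(*rs):
    """Parallel connection of resistances."""
    return 1 / sum(1 / r for r in rs)

gm, alpha_fe = 40e-3, 100   # assumed bias
RE = 1e3                    # assumed emitter resistor

x = gm * RE * (1 / alpha_fe + 1)
H = x / (1 + x)                          # voltage gain, slightly below 1
r_out = par(RE, alpha_fe / gm, 1 / gm)   # eq. (5.6), dominated by 1/gm
print(f"H = {H:.3f}, rout = {r_out:.1f} Ohm")
```

The gain lands just below unity while the output resistance is close to 1/gm, i.e. about 25 Ω, as stated above.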

5.1.4 The common-drain circuit, CDC

The MOS equivalent of the CCC, the CDC, is given in figure 5.7a. The analysis is analogous to that of the CCC.

The input resistance From figure 5.7c we see that, since the gate current is 0, the input resistance is given by the parallel connection of RG1 and RG2, giving rin = RG1//RG2. No other parameter is present in this relation, since the gate acts as an insulator between the input and output.

4A more suitable name would be "base-follower", but it's not my call.

The transfer The transfer from input to output can be calculated in the same way as for the CCC, resulting in:

vout/vin = gm · RS / (1 + gm · RS)

Note that here the phase between input and output is also 0°; the voltage transfer, however, is always a bit smaller than 1 for finite values of RS. This circuit is called the "source-follower".


Figure 5.7: Common-drain circuit

The output resistance can easily be obtained from the small-signal equivalent circuit by forcing e.g. a voltage at the output node for vg = 0:

rout = RS // 1/gm    (5.7)
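The source-follower (CDC) results can be sketched in the same way; all values below are assumed for illustration:

```python
# Hedged sketch: source-follower (CDC) input resistance, transfer and
# output resistance (eq. 5.7), with assumed component values.

def par(*rs):
    """Parallel connection of resistances."""
    return 1 / sum(1 / r for r in rs)

gm = 1e-3                      # A/V, assumed MOS bias
RG1, RG2, RS = 1e6, 1e6, 10e3  # Ohm, assumed bias components

r_in = par(RG1, RG2)           # gate draws no current
H = gm * RS / (1 + gm * RS)    # always a bit below 1
r_out = par(RS, 1 / gm)        # eq. (5.7)
print(f"rin = {r_in/1e3:.0f} kOhm, H = {H:.3f}, rout = {r_out:.0f} Ohm")
```

With the lower gm of a MOS transistor, the follower gain sits noticeably below 1 and rout is higher than for the comparable emitter-follower.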

5.1.5 CEC, CBC, CCC, CSC, CGC and CDC: a comparison All three variants of the single-BJT amplifier circuits have different properties. Table 5.1 gives a comparative summary of the results of the three circuits with respect to signal transfer, input and output resistances.

circuit type                CEC            CBC            CCC
transconductance gm
  for given IC              (q/kT) · IC    (q/kT) · IC    (q/kT) · IC
input resistance            ≅ αfe/gm       ≅ 1/gm         ≅ αfe · RE
                            (average)      (low)          (high)
signal transfer             ≅ −gm · RC     ≅ gm · RC      ≅ 1
output resistance           ≅ RC           ≅ RC           ≅ 1/gm
                            (average)      (average)      (low)

Table 5.1: Comparison between CEC, CBC and CCC

Table 5.2 gives a comparative summary of the results of the three MOS circuits with respect to transfer, input and output resistances.

circuit type                CSC              CGC              CDC
transconductance gm
  for ID = IC               gmMOS < gmBJT    gmMOS < gmBJT    gmMOS < gmBJT
input resistance            very high        ≅ 1/gm           very high
signal transfer             ≅ −gm · RD       ≅ gm · RD        ≅ 1
output resistance           ≅ RD             ≅ RD             ≅ 1/gm
                            (average)        (average)        (low)

Table 5.2: Comparison between CSC, CGC and CDC

5.2 Cascade of multiple amplifiers

The circuits discussed so far always contain just one transistor. Often, we cannot satisfy all (often contradictory) requirements — like input/output impedance, voltage amplification and bandwidth — with circuits containing only one transistor. This is a direct consequence of the limited number of degrees of freedom in circuits with just one transistor. If we must satisfy multiple requirements that are contradictory for a single-transistor amplifier, we clearly must create more degrees of freedom by expanding the circuit into a multistage amplifier.

Signal transfer / signal coupling The next figure shows a linear model of a two-stage amplifier. The amplifier stages are reduced to their bare essence: only the most important parameters are present. These are the input impedance, the output impedance and the voltage gain5.

Linear model of a two-stage amplifier

Single-stage amplifiers can easily be coupled using capacitors. Only if the DC level of the output of one stage equals the DC input level of the next stage can direct coupling be used. Using a coupling capacitor Cin between the signal source with output impedance Rg and the first stage with input resistance Rin, the (-3dB) bandwidth is directly limited:

H(jω) = vin/vg = ZRin / (ZRin + ZCin + ZRg) = jωCinRin / (1 + jωCin(Rin + Rg))

Clearly this circuit has a high-pass characteristic: the voltage gain is 0 for ω = 0, while it has a known and constant value for ω → ∞. Characteristic parameters for a high-pass filter are the "high frequency" signal transfer and the cutoff frequency:

H(∞) = Rin / (Rin + Rg)
f−3dB = 1 / (2πCin(Rin + Rg))

A similar process takes place when coupling the two stages, and when connecting the load at the output of the second stage. In all cases, the output impedance of the preceding stage and the input impedance of the driven stage are of importance for the maximal signal transfer, while every coupling capacitor introduces a cutoff frequency. If we want to limit voltage attenuation, Rout,stage << Rin,next stage.
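The two characteristic parameters of the input coupling can be sketched with a few lines; Rg, Rin and Cin below are assumed values:

```python
# Hedged sketch: "high frequency" transfer and -3dB cutoff of the
# input-coupling high-pass of the two-stage model (assumed values).
from math import pi

Rg, Rin, Cin = 1e3, 10e3, 1e-6   # Ohm, Ohm, F (assumed)

H_inf = Rin / (Rin + Rg)          # transfer well above the cutoff
f_3dB = 1 / (2 * pi * Cin * (Rin + Rg))
print(f"H(inf) = {H_inf:.3f}, f-3dB = {f_3dB:.1f} Hz")
```

A tenfold ratio of Rin to Rg already limits the attenuation to about 10%, illustrating the Rout << Rin rule of thumb above.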

For optimal voltage transfer, output impedances must be low and input impedances must be high. Similarly:

For optimal current transfer, output impedances must be high and input impedances must be low.

5.2.1 Voltage source

The main characteristic of a voltage source is its low output resistance. The emitter follower (CCC) and source follower (CDC) come closest: they have a relatively low rout. If we drive these circuits with a voltage source with internal resistance Rg, then this output resistance is

rout,CCC = RE // ((αfe/gm + Rg)/αfe) = RE // (1/gm + Rg/αfe)

rout,CDC = RS // (1/gm)   (if RS is finite)

For the MOST, the output resistance is approximately 1/gm. If for the bipolar version Rg is sufficiently smaller than αfe/gm, then the output resistance is also approximately equal to 1/gm.
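As a numeric check of the follower output resistances above (a Python sketch; the component values are illustrative, not from the text):

```python
# Sketch of the follower output-resistance formulas; values illustrative.
def par(*rs):
    """Parallel combination of resistances."""
    return 1.0 / sum(1.0 / r for r in rs)

def rout_ccc(r_e, r_g, gm, alpha_fe):
    # Emitter follower: r_out = RE // (1/gm + Rg/alpha_fe)
    return par(r_e, 1.0 / gm + r_g / alpha_fe)

def rout_cdc(r_s, gm):
    # Source follower: r_out = RS // (1/gm)
    return par(r_s, 1.0 / gm)

gm, alpha_fe = 0.04, 100.0   # BJT: gm = 40 mS at roughly 1 mA bias
print(rout_ccc(r_e=4.7e3, r_g=50.0, gm=gm, alpha_fe=alpha_fe))  # close to 1/gm = 25 ohm
print(rout_cdc(r_s=4.7e3, gm=0.002))  # somewhat below 1/gm = 500 ohm
```

With Rg much smaller than αfe/gm, the bipolar result indeed collapses to roughly 1/gm, as stated in the text.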


Voltage source with a finite output resistance: a) with an NPN, b) with an NMOS transistor

It follows from this relation that this output resistance can be reduced by increasing the transistor's transconductance. Hence, one method for reducing the output resistance is to increase the transistor's bias current. If this is not acceptable, the only option left is adding more design degrees of freedom and using these: this boils down to increasing the circuit complexity. Using the definition gm = iout/vin, another way to increase gm is to replace the single transistor by a voltage amplifier plus transistor, which yields gm,overall = Av · gm. The resulting circuit, with output resistance Rout = 1/(Av · gm), is shown below. A proper analog lab power supply uses this principle, without coupling capacitors since it has to be a DC source.


Improved voltage source with rout ≈ RE // 1/(A·gm)
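To get a feel for the improvement, the output resistance of the gain-boosted follower can be evaluated numerically (illustrative values, not from the text):

```python
# Sketch: output resistance of the follower preceded by a gain stage A,
# r_out ~ RE // (1/(A*gm)). All values are illustrative.
def par(a, b):
    """Parallel combination of two resistances."""
    return a * b / (a + b)

def rout_improved(r_e, a_v, gm):
    return par(r_e, 1.0 / (a_v * gm))

# A = 1000 and gm = 40 mS give about 25 milliohm instead of 25 ohm:
print(rout_improved(r_e=4.7e3, a_v=1000.0, gm=0.04))
```

The extra gain stage divides the follower's roughly 1/gm output resistance by another factor A.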

5.2.2 Current source

An ideal current source has an infinite output resistance. A good starting point for the design of a real current source is a basic circuit that already has a relatively high output resistance: a CEC or CSC. In calculating the output resistance of a CEC or CSC, we assume that the bias point is set by a (source or emitter) degeneration resistance. Furthermore, since the output resistance is what matters here, we must take the output resistance of the transistor, μ/gm, into account.

⁵ Stages with voltage output are assumed here. Stages with current output yield very similar results; only the ratio between input and output impedance is reversed, due to current division instead of voltage division.


Current source: a) with BJT, b) with MOS, c) SSEC of both

The bias current for both circuits in the figure is determined by the (impedance of the) bias source at the input, the degeneration resistor and the characteristics of the transistor. For a current source, this current has to be as constant as possible: the output resistance must be as large as possible. Using figure c) we get (for the BJT case):

rout = vcc/ic

ic = gm · vbe + (vcc − vE)/(μ/gm)

vbe = −vE

vE = ic · (αfe/gm // RE)

rout = μ/gm + (μ + 1) · (αfe/gm // RE)

The output resistance of the circuit is the sum of the output resistance of the transistor and the multiplied degeneration resistor (for the BJT in parallel with αfe/gm).
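The boost from degeneration can be made concrete with a short numeric sketch (illustrative parameter values, not from the text):

```python
# Sketch of r_out = mu/gm + (mu+1)*(alpha_fe/gm // RE) for the degenerated
# BJT current source. All parameter values are illustrative.
def par(a, b):
    """Parallel combination of two resistances."""
    return a * b / (a + b)

def rout_current_source(mu, gm, alpha_fe, r_e):
    return mu / gm + (mu + 1.0) * par(alpha_fe / gm, r_e)

mu, gm, alpha_fe = 4000.0, 0.04, 100.0
print(rout_current_source(mu, gm, alpha_fe, r_e=0.0))   # just mu/gm = 100 kohm
print(rout_current_source(mu, gm, alpha_fe, r_e=1e3))   # boosted into the Mohm range
```

Even a modest degeneration resistor multiplies the transistor's own μ/gm output resistance substantially, until αfe/gm starts to limit the parallel term.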

5.2.3 Current mirror

Another function we often need is a sort of copying machine, needed for re-using or distributing signals or bias settings. For copying a voltage, we can use a voltage buffer block with unity voltage gain⁶. Copying currents is more difficult, since currents only flow in closed loops. A solution is using a current source circuit as described in section 5.2.2. For multiple DC current sources, we might simply repeat that current source circuit a number of times, see the left-hand side in the figure below. As presented on the right-hand side, we can also copy the gate voltage to the rightmost transistor, which "saves" us a bias circuit. As discussed earlier, the value of the degeneration resistor can be chosen freely; for instance 0 Ω may be a convenient value.


Methods for generating multiple currents

Noting that the current-voltage relation of a transistor is monotonic, a convenient method to create a (gate-source) voltage that causes a certain drain current is using its inverse function. It sounds complicated, but it isn't. For example, for the BJT we have:

iC = f(vBE) ⇔ vBE = f^-1(iC)

which we can use to replicate a current by using

iC,out = f(f^-1(iC,in))
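This f(f⁻¹(·)) idea can be illustrated with the exponential BJT law. A minimal sketch, with illustrative parameter values and base currents neglected:

```python
# Sketch of i_C,out = f(f^-1(i_C,in)) using the exponential BJT law
# i_C = I_C0 * exp(v_BE / V_T). Base currents are neglected; the
# parameter values are illustrative.
import math

V_T = 0.025       # thermal voltage kT/q, about 25 mV at room temperature
I_C0 = 1e-15      # saturation current (illustrative)

def f(v_be):
    """Transistor law: collector current from base-emitter voltage."""
    return I_C0 * math.exp(v_be / V_T)

def f_inv(i_c):
    """Inverse law: the v_BE that produces a given collector current."""
    return V_T * math.log(i_c / I_C0)

i_in = 1e-3                 # 1 mA input current
print(f(f_inv(i_in)))       # the mirror replicates about 1 mA
```

The input transistor computes f⁻¹ (current in, voltage out), the output transistor computes f (voltage in, current out); identical devices therefore copy the current exactly in this idealized picture.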

A number of the corresponding circuits are given below; they are called current mirrors.


Some current mirror circuits a) to d)

Again, the values of the degeneration resistances can be chosen freely: they influence both the output resistance and the required voltage headroom. Obviously, we can also create current mirrors with PMOS transistors and PNPs. In fact, any active element with an arbitrary monotonic function between output current and input voltage will do for a current mirror. A number of characteristics of current mirrors are calculated below.

From a (correct) small-signal equivalent circuit of current mirror (c), neglecting the transistor’s output resistance:

H = ic2/iin

ic2 = gm2 · vbe2

vbe2 = vbe1

vbe1 = iin · (1/gm1 // αfe1/gm1 // αfe2/gm2)

H = gm2 · (1/gm1 // αfe1/gm1 // αfe2/gm2)

⁶ A wire does the same thing, but usually does not have both a high input resistance and a low output impedance. A voltage amplifier with unity gain has rin → ∞ and rout ≈ 0 Ω.

If both transistors are identical, thus with the same αfe and IC0, then the relation simplifies to

H = gm · (1/gm // αfe/gm // αfe/gm) = αfe/(αfe + 2)

For the large-signal current transfer:

H = iC2/iIN

iC2 = IC0,2 · e^(q·vBE/kT)

vBE = (kT/q) · ln(iC1/IC0,1)

iC1 = iIN − iB1 − iB2 = iIN − iC1/αfe1 − iC2/αfe2

H = (IC0,2/IC0,1) / (1 + (IC0,2/IC0,1)/αfe2 + 1/αfe1)

Yet again, this is an ugly bugger, though the large-signal current transfer is linear because a nonlinear relation is used in combination with its inverse. This equation can be simplified further if we take identical transistors:

H = αfe/(αfe + 2)

If we look closely at the circuit, then we see that (for equal transistors) the collector currents are equal, and the input current consists not only of a collector current, but also of two base currents. The input signal is then iC · (1 + 2/αfe) while the output signal is iC, which results in the previously derived relation. If we use unequal transistors, we get a non-unity current gain factor that can be very useful in some circuits.

Chapter 6

Feedback

6.1 Introduction

In many (electronic) processes, we need a well-defined function. Such a function can be e.g. linear amplification, filtering, AD-conversion, modulation, ... Usually, such operations need a component with power gain, which fundamentally is nonlinear, non-ideal and usually sensitive to almost everything. However, it is possible to create a well-defined circuit operation using non-ideal and nonlinear components, together with just a few linear components. To do so, the (initial) result of an operation is compared with the ideal behavior, and adjusted accordingly. This principle is called "feedback", and it also has applications in many non-electronic situations, like

• driving at a constant speed: the difference between the measured and desired speed is used to accelerate and decelerate.

• tennis matches: having information about where the ball is and at which speed it propagates usually gives better results than playing blindfolded. On a higher level, adjusting the style of your game to the opponent usually helps.

• learning: if your mistakes are pointed out and someone helps you to obtain correct results, then you learn to understand the matter at hand at a much higher level.


Figure 6.1: The basics of feedback


This principle of feedback can be represented blockwise, as in figure 6.1. In this chapter, feedback is used on a system level for electronic circuits, where the principle of figure 6.1 is specified a bit more, resulting in figure 6.2.


Figure 6.2: Primitive implementation of feedback for electronic systems

Example: if we have an amplifier with an adjustable voltage gain, which we want to set to exactly 30, we can use an oscilloscope or voltmeter to measure whether the voltage (amplitude) of the output really is 30 times higher than that of the input. If this is not the case, then the voltage gain setting can be increased or decreased, depending on the measured difference. Doing so, we are in fact performing feedback. The comparison result between input and output signal is a measure for the quality: the larger the difference, the poorer the quality, which means that we have to do more to get it right.

Feedback can be categorized into negative feedback and positive feedback. For negative feedback, the taken measure aims at counteracting some cause. Conversely, positive feedback reinforces the cause. Later in this chapter, some more specific definitions are given.

6.2 Negative feedback

For an ideal voltage amplifier, the output voltage is independent of the load impedance. However, any real voltage amplifier has a non-zero output impedance, resulting in a non-zero dependency of the amplifier’s output voltage on the load impedance. Figure 6.3 shows an idealized voltage amplifier with output resistance Rout, yielding:

H = vout/vin = A · Rload/(Rout + Rload) = A/(1 + Rout/Rload)   (6.1)

Equation (6.1) shows that if Rload changes, so will the output voltage, which is usually an unwanted effect for a voltage amplifier. The root cause for this is the amplifier's non-zero output resistance rout. Using feedback, the effect of the load impedance on the output voltage can be decreased. Note that this corresponds to, at least virtually, decreasing the output impedance level of the amplifier plus feedback.


Figure 6.3: Idealized voltage amplifier with output and load resistances.

6.2.1 Full negative feedback: a first concept

This section presents a first concept for a voltage amplifier system, using the amplifier in figure 6.3, for which the effect of Rload on the signal transfer is as small as possible. Note that this corresponds to a system with a small output resistance rout = vout/iout. There are multiple ways to get the desired result:

• increasing the voltage gain of the amplifier when detecting a too low output voltage (due to e.g. Rload).

• using or creating an amplifier that has a negligibly small Rout compared to Rload.

• making the driving voltage vin for the amplifier dependent on both the signal source voltage vg and on the load-dependent output voltage vout in such a way that vin increases as vout decreases and vice versa.

The first two options are in contradiction with the assumption of using the same amplifier as in figure 6.3. For the third method, using feedback from the output to the input, one extra "component" must be used: a subtraction point¹, leading to the systematic representation of figure 6.4. In figure 6.4 the actual input voltage of the amplifier, vin, equals the difference between vg and vout:

vin = vg − vout (6.2)


Figure 6.4: Principle of negative feedback: vin = vg − vout

¹ The circuit needed to compare vg and vout is for example a differential pair. This circuit is also used in most opamps that explicitly have a differential input. For now, an ideal subtraction point has an infinite input impedance and an output impedance of 0 Ω.

The signal transfer of this circuit can easily be determined; using the brute force method for the circuit in figure 6.4 we get:

H = vout/vg

vout = A · (vg − vout) · Rload/(Rout + Rload)

vout · (1 + A · Rload/(Rout + Rload)) = A · vg · Rload/(Rout + Rload)

H = (A · Rload/(Rload + Rout)) / (1 + A · Rload/(Rload + Rout))   (6.3)

Note that separation of variables is a very important step in simplifying the equations. If you do not perform this step, you will get a recursive solution (which goes on indefinitely); rewriting that into a geometric series gives the same result, it is just a lot more work. Note that from (6.3) it follows that for

A · Rload/(Rload + Rout) >> 1   (6.4)

the signal transfer function H approaches "only" 1, but becomes highly independent of variations in Rload. This insensitivity can also be shown using the output resistance r'out of this circuit. Calculating the output resistance as the quotient of open voltage and short-circuit current (with a non-zero vg):

r'out = open voltage / short-circuit current
      = vout(vg)|Rload→∞ / iout(vg)|Rload=0Ω
      = (vg · A/(1 + A)) / (vg · A/Rout)
      = Rout/(1 + A)   (6.5)

Alternatively, calculating the output resistance by forcing an output voltage or current (with vg = 0):

r'out = vout/iout |vg=0
      = vout / (vout · (1 + A)/Rout)
      = Rout/(1 + A)   (6.6)

From (6.5) and (6.6) we get an output resistance that is a factor (1 + A) lower than the original one. If the voltage gain A is large, then the output resistance will be very small. Furthermore, it follows from (6.3) that the overall voltage gain also decreases by this factor (1 + A). This exchange between gain and the improvement of some amplifier property is fundamental in feedback systems. It also implies that it is quite nice to use amplifiers that have very high gain: other parameters can be optimized by sacrificing part of that gain.
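The gain and output-resistance results can be checked numerically. A sketch with illustrative values (A, Rout, Rload are not from the text):

```python
# Numeric check of full negative feedback: the closed-loop gain
# approaches 1 and the output resistance drops by (1 + A).
# All values are illustrative.
def closed_loop_gain(a, r_out, r_load):
    g = a * r_load / (r_load + r_out)   # loaded open-loop gain
    return g / (1.0 + g)                # separation-of-variables result

def closed_loop_rout(a, r_out):
    return r_out / (1.0 + a)            # factor (1 + A) reduction

a, r_out = 1000.0, 100.0
print(closed_loop_gain(a, r_out, r_load=1e3))  # about 0.999
print(closed_loop_gain(a, r_out, r_load=1e2))  # still about 0.998: load-insensitive
print(closed_loop_rout(a, r_out))              # about 0.1 ohm
```

A tenfold change in Rload barely moves the closed-loop gain, even though the loaded open-loop gain changes by almost a factor of two.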

6.2.2 Partial negative feedback: a generalised concept

Full negative feedback was introduced in §6.2.1. For full negative feedback, the entire output signal is subtracted from the actual input signal. However, we can also subtract just a part of the output signal. The latter is the subject of this section. We start with the system in figure 6.5, assuming that the feedback is realised via an attenuating circuit without phase shift (for instance with a resistive divider or a capacitive divider, or whatever), which corresponds to 0 ≤ β ≤ 1².


Figure 6.5: Negative feedback using an attenuator.

The signal transfer of the circuit of figure 6.5 can easily be derived using separation of variables:

H = vout/vg

vout = A · vin · Rload/(Rout + Rload)

vin = vg − β · vout

vout = A · (vg − β · vout) · Rload/(Rout + Rload)

vout · (1 + A · β · Rload/(Rout + Rload)) = A · vg · Rload/(Rout + Rload)

H = (A · Rload/(Rout + Rload)) / (1 + A · β · Rload/(Rout + Rload))   (6.7)

It follows from (6.7) that the signal transfer of the circuit in figure 6.5 is insensitive to variations in A, and insensitive to changes in Rload and Rout, if A · β · Rload/(Rload + Rout) >> 1; then

H = vout/vg ≈ 1/β   (6.8)

This means that the sensitivity of the voltage transfer to variations in gain and load impedance is negligibly small. The big (big!) advantage is that the signal transfer is determined by the attenuation factor β. This attenuation factor β can be made exact and linear using passive, linear components.

Finding the output resistance of the circuit, r'out, is not very difficult. Using any proper analysis method yields r'out = rout/(1 + A·β). Note that the value of Rload is not present in the output resistance of the circuit, since Rload is external to the amplifier.

² Later on in this book, we will also consider feedback networks that have phase shift, which has many interesting applications.
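The claim that H settles near 1/β and barely depends on A can be checked with a few numbers (illustrative values, not from the text):

```python
# Numeric check of partial negative feedback: with A*beta*Rload/(Rout+Rload)
# large, H stays close to 1/beta even when A changes a lot.
# All values are illustrative.
def h(a, beta, r_out, r_load):
    g = a * r_load / (r_out + r_load)   # loaded open-loop gain
    return g / (1.0 + beta * g)         # partial-feedback transfer

beta, r_out, r_load = 0.1, 100.0, 1e3
print(h(1000.0, beta, r_out, r_load))   # about 9.89, close to 1/beta = 10
print(h(2000.0, beta, r_out, r_load))   # A doubled, H moves by well under 1%
```

Doubling the open-loop gain changes the closed-loop transfer by only a fraction of a percent: the transfer is set by β, not by A.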

6.3 Negative feedback and amplifiers: some examples

Circuits that contain components with power gain require semiconductor devices, which are quite sensitive to everything, including temperature and processing spread. Obtaining well-defined, proper amplifier characteristics without feedback is therefore rather difficult. In this context, "well-defined" corresponds to characteristics with low sensitivity to variations in temperature, biasing conditions and more. Similarly, "proper characteristics" correspond to things like a certain low or high output resistance or a large bandwidth. This section illustrates the effect of feedback on circuit properties, using a few examples.

6.3.1 Effect of negative feedback on bandwidth

An important parameter for every amplifier is the "-3 dB" frequency, or cutoff frequency. This bandwidth is usually determined either by an output load combined with the amplifier's output resistance, or by the (parasitic) load at some internal nodes in the amplifier, resulting in a frequency-dependent gain A(jω); both cases are addressed below.

Amplifier-limited bandwidth

Fundamentally, every internal node in an amplifier has resistive and capacitive loading. This is due to a few laws of physics and due to Maxwell with his nasty equations. As a result, the voltage gain of an amplifier cannot be frequency independent. Usually amplifiers are constructed in such a way that the frequency behavior mainly resembles first-order behavior³; this type of amplifier is usually called "dominantly first-order" in its frequency behavior, for which then

A(jω) ≈ A0/(1 + jω/ω0)

Using this circuit in a negative feedback configuration, as shown in figure 6.6, without Zload, yields the following signal transfer for the circuit including feedback:

H = A0/(1 + A0β) · 1/(1 + jω/(ω0 · (1 + A0β)))

This relation shows that the -3 dB frequency of the feedback system is increased by a factor (1 + A0β) with respect to the open-loop situation, while the gain at low frequencies is decreased by the same factor. As usual for configurations with negative feedback, an improvement in some property is paid for by a similar decrease in gain.
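This exchange is exactly a constant gain-bandwidth product, which a short numeric sketch makes explicit (illustrative values, not from the text):

```python
# Numeric check: feedback multiplies the -3 dB frequency by (1 + A0*beta)
# while dividing the DC gain by the same factor, so the gain-bandwidth
# product stays constant. All values are illustrative.
a0, f0, beta = 1e5, 10.0, 0.01   # open-loop gain 1e5, open-loop -3dB at 10 Hz

gain_cl = a0 / (1.0 + a0 * beta)   # closed-loop DC gain
f_cl = f0 * (1.0 + a0 * beta)      # closed-loop -3 dB frequency

print(gain_cl)                   # about 100 instead of 1e5
print(f_cl)                      # about 10 kHz instead of 10 Hz
print(a0 * f0, gain_cl * f_cl)   # both 1e6: GBW unchanged
```

Sacrificing a factor of roughly 1000 in gain buys the same factor of roughly 1000 in bandwidth.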

Load-limited bandwidth

The load impedance of an amplifier can typically be modelled as the series connection of a resistive load and a capacitive load, while the output impedance of the circuit behaves resistively⁴. Then using negative feedback around the amplifier yields the circuit schematic in figure 6.6:

³ Designing amplifiers in such a way that they have dominantly first-order behavior ensures quite easy usage of these amplifiers in feedback configurations. This will be analyzed in detail later in this chapter.


Figure 6.6: Negative feedback: frequency-dependent amplifier

The voltage transfer of the circuit in figure 6.6 can be determined using, among others, the brute force method:

H = vout/vg

vout = Zload/(Zload + rout) · A · vin

vin = vg − β · vout

Zload = Rload + 1/(jωCload)   (6.9)

Now the hard part of the derivation is completed: only the (back)substitution must be done, and possibly the result must be rewritten in a nice, readable form. Note that rewriting inherently does not change the equation, just its appearance or its usability.

vout = Zload/(Zload + rout) · A · (vg − β · vout)

vout · (1 + Aβ · Zload/(Zload + rout)) = A · vg · Zload/(Zload + rout)

H = (A · Zload/(Zload + rout)) / (1 + Aβ · Zload/(Zload + rout))

This signal transfer function corresponds to that of an amplifier with its own first-order frequency behavior, using A(jω) = A · Zload/(Zload + rout). Using negative feedback, the voltage gain decreases by a factor (1 + Aβ), which equals the increment factor for the bandwidth. Note that there is really no need to expand Zload unless an expression for e.g. the -3 dB frequency must be derived, expressed in e.g. Cload and Rload.

⁴ This is a fair assumption for most circuits operating at low frequencies. At higher frequencies the analysis can be generalized, at the cost of complexity and clarity.

6.3.2 Effect of negative feedback on interference and noise

Interference is a common phenomenon, due to coupling of signals from external sources or system parts into your circuit. Noise is generated in every circuit that dissipates energy, just due to plain physics. If interference or noise is present at the input of an amplifier, then there is nothing you can do to discriminate between the input signal and the noise or interference. However, if noise is generated "somewhere in the circuit", then there is a solution. In the example below, we start with the two circuits in the figure below, where noise or interference is represented by the source vst. It is assumed that the voltage gain factor A3 of the leftmost amplifier in the feedback configuration can be varied, without varying the generated noise, to get overall the same signal gain.


a) amplifier A1, A2 without feedback b) amplifier A3, A2 with feedback β

The output voltage of the circuits is:

vOA = A1A2 · vg + A2 · vst

vOB = (A3A2/(1 + βA3A2)) · vg + (A2/(1 + βA3A2)) · vst
    = (A3A2/(1 + βA3A2)) · (vg + vst/A3)

If we adjust A3 in such a way that we get the same signal gain for the two circuits:

A3A2/(1 + βA3A2) = A1A2

then the situation with negative feedback results in

vOB = A1A2 · vg + (A1/A3) · A2 · vst   (6.10)

From the latter relation, we see that for an amplifier with negative feedback, the noise component is a factor A1/A3 smaller. This means that any noise generated or injected after the first gain stage (anywhere in the feedback loop) of a feedback amplifier system can be suppressed. It can also be derived that if the noise is injected or generated at the input stage, corresponding to A1 = A3 = 1, then the noise cannot be decreased. Again, noise suppression comes at the cost of voltage gain.
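The suppression factor A1/A3 can be verified numerically. A sketch with illustrative gain values (not from the text):

```python
# Numeric check of (6.10): with equal signal gain, noise injected after
# the first stage is suppressed by A1/A3. All values are illustrative.
a2, beta, a3 = 10.0, 0.1, 1000.0
a1 = a3 / (1.0 + beta * a3 * a2)   # choose A1 so both circuits have equal signal gain
v_g, v_st = 1e-3, 1e-3

v_oa = a1 * a2 * v_g + a2 * v_st                               # open loop
v_ob = (a3 * a2 / (1.0 + beta * a3 * a2)) * (v_g + v_st / a3)  # with feedback

print(v_oa, v_ob)   # same signal part; the noise term in v_ob is ~1000x smaller
```

Here 1 + βA3A2 ≈ 1000, so A1 ≈ 1 and the noise term shrinks by roughly three orders of magnitude while the signal gain is identical.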

6.3.3 Effect of negative feedback on nonlinear distortion

Any amplifier with power gain larger than unity contains devices with power gain; these devices are fundamentally nonlinear. This implies that driving the circuit with e.g. a sine produces many harmonics. The main reason is that a nonlinear function can be written as a series expansion, and every non-unity power in that series results in an output signal at some higher harmonic of the original frequency. A general analysis of the effect of negative feedback on nonlinear distortion is rather complicated. However, we can regard all the generated harmonics as noise (although correlated with the input signal, so it is actually something like strongly correlated noise). Then the analysis in §6.3.2 and its results are valid: nonlinear distortion generated in the loop can be suppressed at the expense of closed-loop gain.

6.4 Stability

In the previous sections of this chapter, we have shown that negative feedback results in amplifiers that are less sensitive to everything. Negative feedback actually means coupling (a part of) the output signal, in antiphase, back to the input. This antiphase feedback is usually obtained by subtracting an in-phase signal from the input signal; the subtraction generates the required inversion.


Figure 6.7: General model for negative feedback

Up till now, we have assumed the amplifiers to be frequency independent. However, in reality, the parameters A and β may be frequency dependent, written as A(jω) and β(jω). Figure 6.7 represents the general schematic of negative feedback. The transfer is

vo/vg = A(jω)/(1 + A(jω)β(jω))   (6.11)

Due to their frequency dependency, the quantities A(jω) and β(jω) have phase shift. In this case there may be an angular frequency ω1 for which the phase shift of A(jω1) added to the phase shift of β(jω1) equals −π:

arg(A(jω1)β(jω1)) = −π   (6.12)

For this specific angular frequency, the (probably intended) negative feedback actually becomes positive feedback: the fed-back signal experiences an inversion (due to the subtraction) and has a phase shift of −π, resulting in a net phase shift of 0. With positive feedback a system can be stable, oscillating or unstable; the actual type of behavior depends on the loop gain of the system, A(ω)β(ω), as will be explained in this section. The degree of stability, stability criteria and possible stability representations will also be discussed.

6.4.1 Rough classification of systems with feedback

The loop gain A(jω)β(jω) of a system with feedback determines the behavior of that system. This loop gain partly determines (among others) the magnitude of the input signal of the amplifier:

vcontrol = vG − β · vOUT
         = vG − β · (A/(1 + Aβ)) · vG
         = vG/(1 + Aβ)   (6.13)

Negative feedback may turn into positive feedback if the phase shift of the loop gain Aβ is an odd integer times π. In this case arg(Aβ) = π ± 2nπ and hence Aβ < 0. Inspection of (6.13) shows that the denominator can become "0" or even negative for negative Aβ. If the denominator is "0" for a certain frequency ω0, then every signal at ω0 is amplified ∞ times. In that case, zero input signal may result in a non-zero output signal at ω0. The result is then a circuit that, all by itself, generates a sinusoidal signal at a frequency ω0 for which 1 + A(ω0)β(ω0) = 0. The analysis of these circuits is given in chapter 8.

If (1 + Aβ) becomes negative for some frequency, then the system is unstable: it switches from one state to another depending on the input signal. The circuit behavior then is very nonlinear, and hence for this region linear analysis methods and impedances cannot be used⁵. Switching circuits are useful in digital systems, in analog-to-digital converters and in relaxation oscillators. None of these is discussed in detail in this book. Concluding, we can identify 4 situations for systems with negative feedback:

• A(jω)β(jω)|ω=ω0 = −1 → harmonic oscillation at ω0, chapter 8

• A(jω)β(jω)|ω=ω0 < −1 → switching behavior, positive feedback, not discussed in detail

• Re(A(jω)β(jω)) > 0 for all ω → stable behavior, negative feedback

• Re(A(jω)β(jω))|ω=ω0 < 0 → stable behavior with positive feedback, or switching behavior (not discussed), or harmonic oscillation (chapter 8)

⁵ Impedances assume a linear (Fourier or Laplace) transformation from the element equations to an impedance; for instance from i = C·∂vC/∂t to ZC = 1/jωC.

6.4.2 Stability of systems with negative feedback

In this chapter, we will only analyze stable systems with feedback; non-stable systems are addressed in chapter 8. As stated in §6.4.1, a system's stability depends on the denominator of the signal transfer function of the system, (1 + Aβ), so it depends on Aβ. Note that this denominator implicitly assumes feedback to the inverting node of the subtractor. This loop gain factor Aβ and its behavior as a function of frequency determine the (degree of) stability of a system. An elaborate analysis of stability was performed by Harry Nyquist in 1932. From this analysis, a number of interesting conclusions concerning stability have been drawn. It is not important to know the exact derivation of Nyquist's stability criterion; only the relatively simple conclusions are important! To estimate the (degree of) stability of a system with feedback, Nyquist introduced a polar figure (also known as the Nyquist plot) in which the complex loop gain A(jω)β(jω) is plotted as a function of frequency ω. Using some old mathematical work of Cauchy, Nyquist derived that:

If the contour of A(jω)β(jω) in the polar figure does not circle the point {−1, 0} clockwise, then the corresponding system (with feedback) is stable. If this point is encircled clockwise, then the system is unstable.

Note that this all-determining "−1" point in the complex plane is nothing more or less than the value at which the denominator in anything you'd derive for a system with feedback, (1 + Aβ), would be exactly zero: Aβ = −1.

6.4.3 Stable and unstable: now what?

The stability of a system depends on the polar plot of A(jω)β(jω) and on whether it encircles {−1, 0}. It can be shown that stable systems can display unwanted behavior such as overshoot, ringing or undershoot, but that, after sufficient time, they will always settle. Hence the name, I guess. A number of examples are given in §6.4.4. For an unstable system, the output voltage will continue to grow or fall until it reaches a certain limit (saturation).


Figure 6.8: Negative feedback around an amplifier: linear and stable

As you know⁶, it is possible to analyze a linear system in the time domain (in all cases) with element equations like i = v/R and i = C·∂v/∂t, and in the frequency domain with

⁶ See §0.5.8

element equations like ZR = R, ZC = 1/jωC and ZL = jωL. In a linear system working on an input signal with (angular) frequency ω, all signals have exactly this (angular) frequency. Any linear transformation, including the Laplace and Fourier transformations that underlie impedances, can be used. In a nonlinear system working on an input signal at ω, there may be many signals with different frequency components.

• Stable linear systems can be analyzed in both the time and frequency domains.

• If a stable system's output signal reaches some limit (e.g. the supply voltage), or if the system is significantly nonlinear, then the system may not be analyzed in the frequency domain: the output spectrum is not equal to the input spectrum (see figure 6.9), and Fourier, Laplace and impedances cannot be used⁷.

• An unstable system cannot be analyzed in the frequency domain: the circuit will operate heavily nonlinearly and the linear transformations underlying everything in the frequency domain cannot be used. The only correct way to analyze unsta- ble circuits is doing analyses in the time domain.


Figure 6.9: Negative feedback around an amplifier: nonlinear

⁷ If the circuit runs softly into some limit, clips softly, or if the nonlinearity is sufficiently small, the circuit may be modelled as linear as an approximation. Then linear analysis methods and impedances can be used to analyze this modelled circuit.

6.4.4 Stability of systems with feedback: examples

We have derived in §6.4.2 that the stability of a system depends on the trajectory of A(jω)β(jω) in the complex plane (the Nyquist plot) and on whether the point {−1, 0} is encircled clockwise. This section gives a number of examples and analyses of stable and unstable systems using polar plots.

First-order (low-pass) systems are probably the simplest frequency-dependent systems. Assuming a frequency-independent β and a first-order low-pass A(jω):

H(jω) = A(jω)/(1 + A(jω)β(jω))
      = A0/(1 + A0β) · 1/(1 + jω/(ω0 · (1 + A0β)))

A(jω)β(jω) = A0β/(1 + jω/ω0)   (6.14)

From this it follows immediately that Aβ can never be equal to −1: the real part of Aβ is always larger than or equal to zero. Systems for which Aβ is first-order low-pass are hence unconditionally stable. This can also be visualized using a Nyquist plot. Creating a Nyquist plot analytically is feasible only for first-order systems, for which the result is a half-circle; in general, Nyquist plots must be made numerically⁸. Figure 6.10 shows two curves of a Nyquist plot, one for a first-order relation according to (6.14) with A0 = 10 and β = 1. The inner curve corresponds to the same system, now with β = 0.5. Neither curve encircles or intersects the point −1; hence the Nyquist plot also shows that these systems are stable.
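Sampling the first-order loop gain numerically confirms that the curve never comes near the −1 point. A sketch using the figure's A0 = 10 with β = 1 and β = 0.5:

```python
# Numeric sketch: sample the first-order loop gain (6.14) and confirm
# that the Nyquist curve stays away from the -1 point.
def loop_gain(a0, beta, w_over_w0):
    """First-order low-pass loop gain A0*beta / (1 + j*w/w0)."""
    return a0 * beta / (1.0 + 1j * w_over_w0)

for beta in (1.0, 0.5):
    # sample w/w0 logarithmically from 0.001 to 1000
    pts = [loop_gain(10.0, beta, 10.0**k) for k in range(-3, 4)]
    min_dist = min(abs(p - (-1.0)) for p in pts)
    print(min_dist)   # always > 1: the curve cannot reach -1
```

Since Re(Aβ) ≥ 0 for a first-order low-pass, the distance to −1 is at least 1 at every frequency, which is the numeric counterpart of the unconditional-stability argument above.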


Figure 6.10: Nyquist plot of a first-order system.

⁸ This is one of the few situations where numeric answers give more insight than analytical ones, but only if the numeric values are represented graphically.

It is not possible to get the signal transfer function of a system from its A(jω)β(jω): you can only see the (degree of) stability. For example, below we have a first-order A(jω)β(jω), where the first-order characteristic is either in A(jω) or β(jω):

A(jω)β(jω) = A0β0 / (1 + jω/ω0)

1: A(jω) = A0/(1 + jω/ω0)  →  H(jω) = A0/(1 + A0β0) · 1/(1 + jω/(ω0·(1 + A0β0)))

2: β(jω) = β0/(1 + jω/ω0)  →  H(jω) = A0/(1 + A0β0) · (1 + jω/ω0)/(1 + jω/(ω0·(1 + A0β0)))

The first transfer is clearly a first-order low-pass, while the second is a combination of high-pass and low-pass characteristics, for the same A(jω)β(jω).

Second-order (low-pass) systems are a bit more complex than first-order systems. For a second-order low-pass A(jω) with a real feedback factor β:

A(jω)β(jω) = A0β / (1 + jω/(Qω0) + (jω/ω0)²)    (6.15)

Deriving a readable expression for the shape of the Nyquist plot, and plotting it, probably requires quite some work. As an alternative, a few points on the Nyquist curve can readily be calculated; using a few well-chosen ωs simplifies calculations. The obvious choices for ω that allow for fast calculations are:

• ω =0

• ω = ω0

• ω → ∞

from which we have a phase shift and magnitude for A(jω)β(jω) of:

• 0° respectively |Aβ| = A0β for ω = 0

• −90° respectively |Aβ| = Q·A0β for ω = ω0

• −180° respectively |Aβ| = 0 for ω → ∞.
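The three points can be verified numerically; a small sketch with assumed values A0β = 10, Q = 2 and ω0 = 1 rad/s:

```python
import cmath
import math

# Second-order loop gain of (6.15); A0*beta = 10, Q = 2, w0 = 1 rad/s assumed.
def loop_gain(w, A0beta=10.0, Q=2.0, w0=1.0):
    s = 1j * w / w0
    return A0beta / (1 + s / Q + s**2)

print(abs(loop_gain(0.0)))                               # A0*beta = 10 at w = 0
print(round(math.degrees(cmath.phase(loop_gain(1.0)))))  # -90 degrees at w = w0
print(round(abs(loop_gain(1.0)), 6))                     # Q*A0*beta = 20 at w = w0
print(round(abs(loop_gain(1e9)), 6))                     # -> 0 for w -> infinity
```

At ω = ω0 the denominator reduces to j/Q, which directly gives the magnitude Q·A0β at −90°.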

It can be concluded that a second-order system cannot encircle the (−1, 0) point, neither clockwise nor anticlockwise: a second-order system is hence always stable. However, the Nyquist plot can come very close to the (−1, 0) point. This means that the denominator (1 + Aβ), which appears in about every feedback relation, then becomes very small. For example, the signal transfer function can show significant peaking, or a desired low output impedance can be very high around the corresponding frequency.

An illustration is given in the next two figures. Three Nyquist curves are shown in the left-hand plot of figure 6.11, for three different values of Q with A(jω)β(jω) according to (6.15). The right-hand plot of figure 6.11 zooms in on the area around the −1 point: none of the polar plots encircles or crosses the −1 point, although there is just a small clearance between the curves and the −1 point.

Figure 6.11: Second-order systems with real feedback (curves for Q = 0.5, Q = 1 and Q = 2): always (just) stable

The stability of a system can be seen very clearly in a Nyquist plot (further on we also introduce the degree of stability), but the signal transfer of the system cannot be obtained from this plot. The signal transfer H(jω) corresponding to the A(jω)β(jω) curves in figure 6.11, assuming a real-valued β, is given in figure 6.12.

Figure 6.12: Bode plots corresponding to the Nyquist plots of figure 6.11

Higher-order (low-pass) systems are again a bit more complex than second-order systems, and hence they can have more phase shift. In general, an nth-order low-pass system can have a phase shift of n·(−90°). From this it follows that all systems of third or higher order might have sufficient phase shift to encircle the −1 point in the Nyquist plot clockwise: they may get unstable9.


Figure 6.13: Nyquist plots for two third-order (low-pass) systems: can be stable, but can also be unstable.

In figure 6.13, the Nyquist plots of two third-order low-pass systems are shown. Both have a maximum phase shift of n·(−90°) = −270°. For one system |A(jω)β| > 1 at the ω for which the phase shift equals −180°: this corresponds to an unstable system. The other has a much smaller loop gain (for instance due to a smaller β) and its Nyquist plot does not encircle the −1 point: this system is stable.

9The stability of systems follows from Aβ. Without feedback, β = 0 and hence the −1 point cannot be encircled. E.g. a 100th-order A(jω) with a huge DC gain A0 and with β = 0 is stable: the loop gain Aβ is 0. In other words: there is no loop, hence there is nothing to become unstable. This is a convenient characteristic of any feedforward system: they are unconditionally stable; something which we cannot say about feedback systems. Well, we could, but it would be incorrect.

6.4.5 Phase and gain margin

The Nyquist plots show whether a feedback system is stable or unstable. As you probably already figured out: this is a yes/no answer; the system is stable or not. In this context, an unstable system has a divergent output signal for any non-zero input signal; a stable system has a bounded output signal for a bounded input signal. For normal systems, the stability requirement is much more strict than “just stable”: usually a sufficiently flat frequency response or sufficiently fast settling behavior after an input signal step is required. This can all be nicely described by stability margins:

The stability margin corresponds to the distance between the Aβ curve and −1

The definition above can be made specific in two ways, since the loop gain Aβ is a complex curve (i.e. a curve in the complex plane).

• One stability margin corresponds to the distance between |Aβ| and |−1| for arg(Aβ) = −180°

• The other stability margin is the angular distance between arg(Aβ) and arg(−1) for |Aβ| = |−1|.

These two definitions are denoted as gain margin and phase margin. If both margins are small, then there is only little room for error before the system becomes unstable; such a system is usually called marginally stable.

Phase margin is defined for |A(jω)β(jω)| = 1 and represents the difference between arg[A(jω)β(jω)] and −π. The phase margin is positive if the system is stable. Gain margin is defined for arg[A(jω)β(jω)] = −π and represents the difference between 1 and |A(jω)β(jω)|. If the gain margin is larger than 1, or positive in [dB], then the system is stable.
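Both margins can be estimated numerically from a sampled loop gain. The margins routine below just applies the two definitions; the third-order loop gain 4/(1 + jω)³ is a made-up example:

```python
import numpy as np

def margins(loop):
    """Estimate (phase margin [deg], gain margin [dB]) of a loop-gain function."""
    w = np.logspace(-2, 4, 200000)
    g = loop(w)
    mag = np.abs(g)
    ph = np.unwrap(np.angle(g))
    i = np.argmin(np.abs(mag - 1.0))     # |A*beta| = 1 crossing
    pm = np.degrees(ph[i] + np.pi)       # angular distance to -180 degrees
    j = np.argmin(np.abs(ph + np.pi))    # arg(A*beta) = -180 degrees crossing
    gm = -20.0 * np.log10(mag[j])        # distance to 0 dB
    return pm, gm

pm, gm = margins(lambda w: 4.0 / (1 + 1j * w)**3)
print(round(pm), round(gm))              # 27 6: stable, but not comfortably so
```

For this example loop gain the unity-gain phase shift is about −153°, giving roughly 27° of phase margin, and |Aβ| = 0.5 at −180°, giving about 6 dB of gain margin.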

Figure 6.14: phase margin and gain margin

The degree of stability and the two introduced stability margins may not mean much to you right now, so below some examples are presented of stability margins and their impact for a few stable feedback systems. We do this by giving the step response and the polar figure of the systems. Since we have nothing else to do anyway, I will also show some Bode plots. This example uses two systems: one is first-order and the other is second-order. As derived earlier, both systems are fundamentally stable, although the Aβ curve of the second-order system may get very close to the −1 point. The systems have an open loop gain of:

A1(jω) = A0 / (1 + jω/ω0)

A2(jω) = A0 / (1 + jω/(Q·ω0) + (jω/ω0)²)

For the first-order system we have the following closed loop gain (with a passive, non-phase-shifting feedback circuit β):

H1(jω) = A1(jω) / (1 + A1(jω)·β) = A0 / (1 + A0β + jω/ω0)
       = A0/(1 + A0β) · 1/(1 + jω/(ω0·(1 + A0β)))

From this equation, we see that for negative feedback the DC gain becomes smaller, while the bandwidth increases:

ω0,1 = ω0·(1 + A0β)  and  A0,1 = A0/(1 + A0β)

For the second-order system we get something similar, again with a passive feedback circuit without phase shift.

H2(jω) = A2(jω) / (1 + A2(jω)·β) = A0 / (1 + A0β + jω/(Q·ω0) + (jω/ω0)²)
       = A0/(1 + A0β) · 1/(1 + jω/(Q·ω0·(1 + A0β)) + (jω/(ω0·√(1 + A0β)))²)

Here the same goes for negative feedback: the DC gain decreases and the bandwidth increases. Furthermore, the quality factor is now dependent on β:

ω0,2 = ω0·√(1 + A0β)
A0,2 = A0/(1 + A0β)
Q2 = Q·√(1 + A0β)
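A quick numeric check of the first-order expressions, with assumed values A0 = 1000, β = 0.1 and ω0 = 1 rad/s:

```python
import math

# Closed-loop first-order system; A0 = 1000, beta = 0.1, w0 = 1 rad/s assumed.
A0, beta, w0 = 1000.0, 0.1, 1.0
H1 = lambda w: A0 / (1 + A0 * beta + 1j * w / w0)

A0_cl = abs(H1(0))                 # closed-loop DC gain, = A0 / (1 + A0*beta)
w0_cl = w0 * (1 + A0 * beta)       # predicted closed-loop bandwidth
print(round(A0_cl, 3))             # 9.901: gain divided by (1 + A0*beta) = 101
# at w0_cl the magnitude is 1/sqrt(2) of the DC value, i.e. the -3 dB point:
print(math.isclose(abs(H1(w0_cl)), A0_cl / math.sqrt(2)))
```

So the DC gain drops by the factor (1 + A0β) while the −3 dB frequency rises by the same factor: the gain-bandwidth product is unchanged.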

The Nyquist plots for both systems are presented in the figure below; the situation without feedback is presented on the left. Note that the leftmost Nyquist curve collapses into the origin, since β = 0. The center and right plots have increasingly larger βs. The plot on the right has a curve which nearly intersects the −1 point: only very little phase margin and gain margin is available there. To estimate the phase margin, the unit circle about the origin has been added. For both systems, the gain margin is ∞.


left: Nyquist plot for the two systems without feedback. center: Nyquist plot for the two systems with a small negative feedback. right: Nyquist plot for the two systems with a larger negative feedback

The step responses for these three situations clearly show the effects of the phase margins; they are presented in the figure below. The step size has been kept constant, while the y-axes of the responses are scaled differently, in order to keep the responses visible.

left: step responses for the two systems without feedback. center: step responses for the two systems with a small negative feedback. right: step responses for the two systems with a larger negative feedback

From the above graphs, we see that a smaller phase or gain margin results in more ringing in the response: the response is less stable. However, as long as the system as a whole is stable, the ringing will be damped. If the −1 point is intersected, then the response is an undamped harmonic oscillation, which is used later in this book for creating harmonic oscillators. For systems which encircle the −1 point clockwise, the response will continue to grow over time, hence the system really is unstable. Calculations of step responses can best be done using the Laplace transformation, which is however not treated in this book. In short, using Laplace, an nth-order transfer function is decomposed into a sum of n first-order transfer functions. The poles of these first-order components directly give information on the stability. For e.g. a second-order system:

H(jω) = H0 / (1 + jω/(Qω0) + (jω/ω0)²)
      = H0 · 1/(p1 + jω/ω0) · 1/(p2 + jω/ω0)

p1,2 = 1/(2Q) ± √(1/(4Q²) − 1)

If the poles p1,2 are complex (conjugates), then the step response consists of two exponential functions with a complex argument, of the form e^(−something·t)·sin(ω_something·t). If the real part of the poles is negative, then we have a damped exponential function; if the real part is positive, then the poles cause a growing exponential function and thus instability.
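A small sketch of the pole expression above: for Q > 1/2 the p1,2 become a complex conjugate pair, which corresponds to a ringing (but here damped) step response:

```python
import cmath

def poles(Q):
    # p1,2 = 1/(2Q) +/- sqrt(1/(4Q^2) - 1), the p's of the factors (p + jw/w0)
    d = cmath.sqrt(1 / (4 * Q**2) - 1)
    return 1 / (2 * Q) + d, 1 / (2 * Q) - d

p1, p2 = poles(0.4)                           # Q < 0.5: two real p's, no ringing
print(p1.imag == 0 and p2.imag == 0)          # True
p1, p2 = poles(2.0)                           # Q > 0.5: complex conjugate pair
print(p1.imag != 0 and p1 == p2.conjugate())  # True: ringing step response
print(p1.real > 0)                            # True: the ringing is damped
```

Note that these p's appear in factors (p + jω/ω0); the corresponding natural response is damped when the real part of p is positive.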

A Bode plot also shows the degree of stability of a system. Note that for unstable systems we cannot make a Bode diagram, since the system is not operating linearly anymore! The Bode plots below represent the closed loop transfer of the second-order system with four different feedback factors; the first three are equal to the βs used in the plots above, while the fourth has a much larger β.

Bode plots of the used second-order systems with 4 different values for β

The upper left curve of the Bode plot corresponds to that of the open loop configuration. With increasing β, the DC gain decreases and the bandwidth increases. The decreasing phase margin shows up as a peak in the magnitude plot of the signal transfer; this corresponds to a higher quality factor Q. The peak in the signal transfer is due to the Aβ curve closely passing the −1 point (in the Nyquist plot): at and around that frequency the closed loop gain may get very high.

6.4.6 Positive feedback: peaking

If the loop gain Aβ of a system enters the unit circle around the −1 point in the Nyquist plot, then the closed loop gain gets larger than the open loop gain for the corresponding frequencies. Below, some issues are explained a bit further. Figure 6.15 shows a Nyquist plot for a higher-order system. The left and right side are the same Nyquist plot, but with different annotations. Yes, it could fit in one plot, but that would not be very readable.


Figure 6.15: Peaking for ω>ω1, maximum peaking for ω = ωm

The transfer function of a (stable) system with feedback is

H(jω) = A(jω) / (1 + A(jω)β(jω))

If the magnitude of the denominator of the expression above is smaller than 1, then the closed loop transfer is larger than the open loop transfer. This is called peaking and is due to positive feedback: a part of the output signal is fed back to the system’s input in such a way that the input signal is amplified. In a Nyquist plot, the vector A(jω)β(jω) is plotted; the vector 1 + A(jω)β(jω) is then simply the vector from the −1 point towards the curve. This vector is drawn in the right-side Nyquist plot of figure 6.15. From the figure, we clearly see that there is peaking for all ω within the unit circle: |1 + A(jω)β(jω)| < 1 yielding |H(jω)| > |A(jω)|. If the Nyquist curve approaches the −1 point even further, meaning that |1 + A(jω)β(jω)| gets smaller, then more and more peaking results. The maximum peaking is obtained, in figure 6.15, for ω = ωm. A convenient characteristic of positive feedback and the associated peaking is that you get gain for free. On the downside, noise, nonlinearity and spread are also amplified instead of suppressed.
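The peaking condition can be illustrated numerically. The second-order loop below (A0 = 100, β = 0.1, Q = 4, ω0 = 1 rad/s) is a made-up example; the check confirms that |H| > |A| at exactly the frequencies where the loop gain is inside the unit circle around −1:

```python
import numpy as np

# Made-up second-order open loop: A0 = 100, Q = 4, w0 = 1 rad/s; beta = 0.1.
w = np.logspace(-1, 1, 10001)
s = 1j * w
A = 100.0 / (1 + s / 4 + s**2)
beta = 0.1
H = A / (1 + A * beta)                 # closed loop

peak = np.abs(H) > np.abs(A)           # frequencies with peaking
inside = np.abs(1 + A * beta) < 1      # loop gain inside the unit circle at -1
print(np.array_equal(peak, inside))    # the two conditions coincide
print(peak.any())                      # peaking does occur for this loop
```

For this loop the peaking region starts at ω = √6·ω0, where |1 + Aβ| drops below 1.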

6.4.7 The Bode plot as tool for presentation

The polar diagram usually gives a lot of insight into the behavior of feedback systems, but the interpretation of polar curves with frequency as an implicit parameter is quite difficult. Furthermore, the dynamic range of a system (the difference between the smallest and largest value of the transfer) is so large that important details are easily lost. An example is figure 6.11, where you have to zoom in quite far to really say something about the stability around the −1 point. This is the main reason for using a logarithmic Bode plot when designing or evaluating feedback systems. When using a Bode plot — instead of the Nyquist plot — to evaluate stability issues, the loop gain A(jω)β(jω) must be plotted instead of the (closed loop) transfer function. In this section, we use an amplifier with a third-order low-pass characteristic as an example. The general transfer function of such a circuit is:

A(jω) = A0 / (1 + a·jω + b·(jω)² + c·(jω)³)
      = A0 / ((1 + jωτ1)(1 + jωτ2)(1 + jωτ3))    or
      = A0 / ((1 + jωτ1)(1 + jω·τ0/Q + (jωτ0)²))

In the second notation, the original transfer function is decomposed into three (real) first-order transfer functions. The third notation is valid if the original transfer function cannot be decomposed into three real first-order transfer functions: then a decomposition into one first-order and one second-order transfer function can be done. Both are plotted (magnitude plot only) in figure 6.16. In the magnitude plot, the cutoff frequencies are marked by crosses; these cutoff frequencies are clearly recognizable in the Bode plot as the frequencies where the phase shift is an integer times −45°.


Figure 6.16: Bode plot of two third-order (open loop) transfers

For the example:

τ1 = 1/10 s = 0.1 s, H0 = 10⁶
τ2 = 1/310 s, τ3 = 1/10³ s = 0.001 s
τ0 = 1/3100 s, Q = 0.7

To determine the (degree of) stability of a feedback system, the loop gain must be evaluated. This loop gain may not enclose the −1 point clockwise in the Nyquist plot; the distance — in angle or magnitude — between the curve and the −1 point then corresponds to the phase or gain margin. The Bode plot can also be used to determine phase and gain margin. To do so, the loop gain A(jω)β(jω) is plotted. We will clarify this by means of a few examples.

Stability with a real β

For figure 6.17, a real feedback factor equal to 0.01 (−40 dB) is applied to both systems in figure 6.16. The Bode plot of the loop gain is then simply 40 dB lower than the open loop curve, and the phase characteristic is identical to that of A(jω). Note that this yields a closed loop signal transfer function that ideally equals H ≈ 1/β = 100.


Figure 6.17: Bode plot of A(jω)β(jω), with A(jω) as shown in figure 6.16, for β = 0.01

The critical point for stability is the −1 point. In Bode terms, this −1 point corresponds to 0 dB magnitude and −180° phase shift. From the Bode plot above, we can directly observe that the loop gain is 0 dB at a phase shift of −195° respectively −230°. From this it can be concluded that both systems are unstable for β = 0.01: the phase shift exceeds −180° for |Aβ| = 1 ≡ 0 dB.

Figure 6.18: Bode plot of A(jω)β(jω), with A(jω) as shown in figure 6.16, for β = 0.0001

Figure 6.18 shows the same systems, but now with a smaller feedback factor: β = 0.0001 ≡ −80 dB, ideally yielding a signal gain of 80 dB. It can readily be observed in the figure that the phase shift of the two systems is about −165° respectively −105° for |Aβ| = 1: both systems are stable and have a phase margin of respectively 15° and 75°. The loop gain at the critical phase shift of −180° can also be derived from the Bode plots: −15 dB and −25 dB. We then readily obtain that the gain margins of the feedback systems are 15 dB and 25 dB.

6.5 Feedback and dominant first-order behavior

If we want to create a stable feedback system, a sufficient phase margin and gain margin must be ensured. As shown previously, estimating these margins requires calculating A(jω) and β(jω) for every feedback system, and checking whether it is sufficiently stable. Obviously, this poses a problem, mainly because time is scarce and we prefer playing games instead of calculating Bode plots or Nyquist plots. The design process of a (stable) amplifier with feedback can be simplified using idiot-proof amplifiers. Idiot-proof amplifiers are designed in such a way that they are stable with any real-valued attenuating β. As derived in this chapter, a first-order system cannot encircle the −1 point in a Nyquist plot: an amplifier with an arbitrary real-valued β ≤ 1¹⁰ and with first-order frequency behavior then is also unconditionally stable. Noting that the exact shape of Aβ for |Aβ| << 1 is not relevant for stability either, the amplifier does not need to be exactly first-order: it only needs to behave like a first-order system for |A| > 1 to be stable with a real-valued β ≤ 1. This type of amplifier is denoted as dominantly first-order.


Figure 6.19: Amplifier with dominant first-order behavior: it behaves like a first-order system down to |A(jω)| = 1, and looks (here) like at least a third-order system after that.

Dominant first-order behavior is often used in amplifiers to get simple feedback and a guaranteed stable system; it makes the amplifier idiot-proof. One method of creating such dominant first-order behavior is placing one cutoff point at a very low frequency, such that all other cutoff frequencies lie where |A(jω)| < 1. The other cutoff frequencies are usually due to parasitic capacitances at nodes in a circuit, combined with the resistance at those nodes.

The low-frequency cutoff point of an amplifier with dominant first-order behavior is usually called the dominant cutoff point. Such amplifiers are stable even with the — with respect to stability — most horrifying resistive feedback: β = 1. With β = 1 the closed loop gain would be unity, hence the name of these idiot-proof amplifiers: unity gain stable (UGS) amplifiers.

10The most widely used feedback circuit that does not amplify and is not frequency dependent is a resistive feedback circuit.

6.5.1 Creating dominant first-order behavior

Simple amplifiers are built from one amplifying stage. Usually, their signal transfer function has a second-order characteristic: there is a controlling source with an output impedance that drives the amplifier, and there is an output circuit with a certain output and load impedance. Both combinations normally cause a low-pass or band-pass characteristic.


Figure 6.20: Simple amplifier circuit: usually second-order transfer

For simple (single-transistor) amplifier circuits, we usually create feedback by means of source or emitter degeneration. This way, the loop is usually dominantly first-order. In general, the much more complex opamp-like amplifiers are used, since the possibilities of simple (single-transistor) amplifiers are limited. Opamp-like amplifiers consist of several cascaded amplifier stages. Usually one is dedicated to converting the differential input voltage into a current, another takes care of a high voltage gain, and yet another creates a low output impedance. This much more complex structure typically results in high gain with a higher-order transfer function: every stage adds at least one order.


Figure 6.21: General amplifier circuit: usually with an (n +1)th-order transfer

The open loop gain for the circuit of figure 6.21 is

A(jω) = vout/vin = A1(jω) · A2(jω) · ... · An(jω)

Ai(jω) = Ai(0) / (1 + jω·(rout,i−1//rin,i)·(cout,i−1 + cin,i))

Creating dominant first-order behavior in a multi-stage amplifier is nothing more or less than moving one of the cutoff points to a very low frequency. This is usually accomplished by adding a very large capacitance to a high-impedance internal node. An example of its effect on the open loop gain A(jω) is shown in figure 6.22: by adding an extra capacitance, the lowest cutoff frequency is moved to a very low value, which causes a dominant first-order characteristic, at the cost of gain. In the Bode plot for this example, the second cutoff frequency is about a decade higher than the unity-gain frequency. This gives a phase margin of about 80°.
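The effect of adding a dominant pole can be sketched numerically. Below, an amplifier is modelled as a product of first-order sections; the DC gain and pole frequencies are made-up values. With β = 1 the three-pole loop has a negative phase margin, while adding one pole at a much lower frequency makes the loop behave first-order down to |A| = 1 and restores a comfortable margin:

```python
import numpy as np

def A(w, poles, a0=1000.0):
    """Cascade of first-order sections with the given pole frequencies (rad/s)."""
    g = a0 * np.ones_like(w, dtype=complex)
    for p in poles:
        g = g / (1 + 1j * w / p)
    return g

def phase_margin(loop_fn):
    w = np.logspace(0, 9, 400000)
    g = loop_fn(w)
    i = np.argmin(np.abs(np.abs(g) - 1.0))       # unity-gain frequency
    return np.degrees(np.unwrap(np.angle(g))[i] + np.pi)

pm_plain = phase_margin(lambda w: A(w, [1e5, 1e6, 1e7]))       # beta = 1
pm_comp  = phase_margin(lambda w: A(w, [1e1, 1e5, 1e6, 1e7]))  # dominant pole added
print(pm_plain < 0)   # True: unstable as unity-gain follower
print(pm_comp > 60)   # True: comfortable phase margin
```

Note the price paid: the compensated amplifier loses gain from a far lower frequency onwards, exactly as in figure 6.22.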


Figure 6.22: Amplifier with dominant first-order transfer: it behaves like a first-order system up to |A(jω)| = 1, and looks like at least a third-order system after that.

Chapter 7

The op-amp and negative feedback

7.1 Introduction

This chapter introduces the operational amplifier, or op-amp. This amplifier has two characteristics that we already used: it has a subtraction point and a high voltage gain. The term operational amplifier stems from the era when signal conditioning and operations on signals were done only in the analog domain: the 1940s to 1960s. Many mathematical operations could be implemented using op-amps with proper feedback circuitry around them: multipliers, adders, differentiators and more can easily be built. Drawbacks of op-amp based signal operations include noise and spread issues (not addressed in this book), frequency dependencies and impedance related issues. Nowadays signal processing is preferably done in the digital domain, which is more power efficient at low frequencies, does not have impedance or spread related limitations and is quite easy to generate. Op-amp-like configurations — a gain stage with feedback wrapped around it in some way — are still widely used in electronics at all places where digital signal processing cannot be used: in signal conditioning before analog-to-digital conversion, in RF circuitry, in A-D and D-A conversions and more. In this chapter mainly simple applications of op-amps are discussed, along with the major non-idealities and their impact.


Figure 7.1: The op-amp: a) abstract model b) symbol

The symbol of an op-amp is shown in figure 7.1: it is essentially a voltage amplifier with a differential input voltage that generates an output voltage vout = A·(v+ − v−). For an ideal op-amp, the voltage gain A is infinite, the input impedance is infinite and the output impedance is 0 Ω. The circuitry inside the op-amp is not dealt with in this chapter: it essentially consists of a number of basic building blocks — similar to the ones in chapter 5 — that together make up the op-amp.

7.2 Linear applications

Op-amp based circuits are frequently used for analog signal processing applications. These mainly include linear processing such as current-to-voltage conversion, voltage gain, filtering, integration and more. In the following subsections a number of these applications are discussed in some detail. Extending this to other signal processing functions is quite straightforward.

7.2.1 Non-inverting voltage amplifier

One of the basic configurations of an op-amp is given in figure 7.2. Negative feedback is wrapped around the op-amp, while the total circuit is driven at the +-input. For clarity, the non-ideal input and output resistances and the voltage-controlled voltage source that models the operation of the op-amp are shown. Below, a number of properties of this system are derived.


Figure 7.2: Non-inverting amplifier

The voltage gain of the circuit above can easily be calculated using some simplifications (idealisations): Rin → ∞ Ω and Rout → 0 Ω. An example of the derivation of the voltage gain is:

vout = A·(v+ − v−)
v+ = vg
v− = R2/(R1 + R2) · vout
vout = A·(vg − R2/(R1 + R2)·vout)
vout = A·vg / (1 + R2/(R1 + R2)·A)

The relation for the voltage gain immediately follows:

vout/vg = A / (1 + R2/(R1 + R2)·A)

vout/vg |A→∞ = (R1 + R2)/R2    (7.1)
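A quick numeric illustration of (7.1), with assumed values R1 = 9 kΩ and R2 = 1 kΩ (ideal gain 10):

```python
# Non-inverting amplifier gain of (7.1); R1 = 9 kOhm, R2 = 1 kOhm assumed.
R1, R2 = 9e3, 1e3
gain = lambda A: A / (1 + R2 / (R1 + R2) * A)

print(round(gain(1e3), 2))   # 9.9: A = 1000 gives a gain about 1% below ideal
print(round(gain(1e6), 2))   # 10.0: A = 10^6 is essentially ideal
print((R1 + R2) / R2)        # 10.0: the A -> infinity limit
```

The closed-loop gain converges to (R1 + R2)/R2 as soon as the loop gain A·R2/(R1 + R2) is much larger than 1.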

The input resistance of the circuit can also be determined. First, this input resistance is derived explicitly, assuming a finite value for rin,op−amp. This results in quite some typing (for me). The hardest part of this brute-force approach is to neatly find all the simple relations iteratively and to keep track of what has already been used:

rin = vg/ig
ig = (vg − v−)/rin,op−amp
v− = (R2//rin,op−amp)/(R1 + R2//rin,op−amp) · vout + (R2//R1)/(R1//R2 + rin,op−amp) · vg
vout = A·(vg − v−)
v− = vg · (A·(R2//rin,op−amp)/(R1 + R2//rin,op−amp) + (R2//R1)/(R1//R2 + rin,op−amp)) / (1 + A·(R2//rin,op−amp)/(R1 + R2//rin,op−amp))

Substituting all these relations gives the desired result. Note that the relation for rin below is rewritten a few times. This does not change the relation: all forms are identical and hence all are just as correct. The main purpose is to get a readable relation:

ig = vg/rin,op−amp · (1 − (R2//R1)/(R1//R2 + rin,op−amp)) / (1 + A·(R2//rin,op−amp)/(R1 + R2//rin,op−amp))

rin = rin,op−amp · (1 + A·(R2//rin,op−amp)/(R1 + R2//rin,op−amp)) / (1 − (R2//R1)/(R1//R2 + rin,op−amp))

rin = (R1//R2 + rin,op−amp) · (1 + A·(R2//rin,op−amp)/(R1 + R2//rin,op−amp))

It looks like a lot of work, and it is. However, if we assume rin,op−amp to be much larger than R1 and R2, then the derivation becomes much simpler1:

rin = vg/ig
ig = (vg − v−)/rin,op−amp
v− ≈ R2/(R1 + R2) · vout
vout = A·(vg − v−)
v− ≈ vg · (R2/(R1 + R2)·A) / (1 + R2/(R1 + R2)·A)
rin ≈ rin,op−amp · (1 + R2/(R1 + R2)·A)

This relation clearly shows that the input impedance of the non-inverting amplifier configuration is quite high for large values of A. In the limit A → ∞, the input impedance of this system is ∞ Ω for any positive rin,op−amp: even for e.g. rin,op−amp = 1 μΩ with A → ∞ you will get an infinite system input resistance. The output resistance of the circuit is calculated in much the same way as described above. Assume that the output port of the system is driven by a voltage source — with vg = 0 — and assume an infinite input resistance but a nonzero rout,op−amp:

rout = vout/iout
iout = vout/(R1 + R2) + (vout − A·(v+ − v−))/rout,op−amp
v+ = vg = 0
v− = β·vout
iout = vout · (1/(R1 + R2) + (1 + A·β)/rout,op−amp)
rout = (R1 + R2) // (rout,op−amp/(1 + A·β))

1This probably has something to do with the “Modelling” chapter in this book.

In words: the output resistance of the system is the resistance of the β network at the output, in parallel to the output resistance of the op-amp, decreased by a factor (1 + Aβ). Usually this last term is dominant — the most low-ohmic — mainly due to the large Aβ.

7.2.2 Inverting voltage amplifier

A different basic circuit, if not the basic circuit, for an op-amp is shown in figure 7.3. This circuit has different properties than the non-inverting circuit of figure 7.2. The main difference between the two is that the circuit below has both the input signal and the feedback signal at the inverting input of the op-amp.


Figure 7.3: Inverting amplifier configuration

The gain of the configuration of figure 7.3 can easily be obtained if we assume rin → ∞ Ω and rout = 0 Ω:

H = vout/vg
vout = A·(v+ − v−)
v+ = 0
v− = vg·R1/(R1 + R2) + vout·R2/(R1 + R2)
vout = −A·vg·R1/(R1 + R2) / (1 + A·R2/(R1 + R2))

For the signal transfer:

H = −A·R1 / (R1 + (A + 1)·R2)

H|A→∞ = −R1/R2    (7.2)
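A quick numeric illustration of (7.2), with assumed values R1 = 10 kΩ (feedback resistor) and R2 = 1 kΩ (input resistor):

```python
# Inverting amplifier gain of (7.2); R1 = 10 kOhm, R2 = 1 kOhm assumed.
R1, R2 = 10e3, 1e3
H = lambda A: -A * R1 / (R1 + (A + 1) * R2)

print(round(H(1e3), 2))   # -9.89: about 1% away from ideal for A = 1000
print(round(H(1e7), 2))   # -10.0: essentially the ideal -R1/R2
```

Note that the finite-A error of the inverting amplifier is slightly larger than that of the non-inverting amplifier with the same ideal gain magnitude, because the loop gain here is A·R2/(R1 + R2).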

The circuit of figure 7.3 is called an inverting op-amp configuration, since it has a negative voltage gain. Other characteristics of the circuit are covered below; again it is assumed for simplicity that Rin → ∞ Ω and Rout → 0 Ω.

The input resistance of this circuit can be calculated in various ways; one of those methods is:

rin = vg/ig
ig = (vg − v−)/R2
v− = −vout/A
vout = −A·vg·R1/(R1 + R2) / (1 + A·R2/(R1 + R2))
v− = vg·R1/(R1 + R2) / (1 + A·R2/(R1 + R2))
ig = vg/R2 · (1 + A·R2/(R1 + R2) − R1/(R1 + R2)) / (1 + A·R2/(R1 + R2))
rin = R2 · (1 + A·R2/(R1 + R2)) / (1 + A·R2/(R1 + R2) − R1/(R1 + R2))

This last expression for rin is correct, but also quite ugly. There are (infinitely) many ways of writing this equation, where some representations are more “readable” than others. A few examples are given below:

rin = R2 · (1 + A·R2/(R1 + R2) + R1/(R1 + R2) − R1/(R1 + R2)) / (1 + A·R2/(R1 + R2) − R1/(R1 + R2))
    = R2 · (1 + (R1/(R1 + R2)) / (1 + A·R2/(R1 + R2) − R1/(R1 + R2)))
    = R2 · (1 + R1/(R1 + R2 + A·R2 − R1))
    = R2 + R1/(1 + A)

The latter form is very readable, and shows that for a large A the input resistance almost equals R2. If we let A → ∞, then the equation simplifies to:

rin = R2

First obtaining the complete answer and subsequently substituting A → ∞ gives the correct answer, but it is much easier to assume A → ∞ a priori. In that case (for a finite output voltage), the differential input voltage will be (something finite)/∞ = 0 V. This simplifies the derivation to:

rin = vg/ig
ig = (vg − v−)/R2
v− = v+ = 0
ig = vg/R2
rin = R2

A different but simple derivation can be performed by acknowledging that the input resistance of the circuit is equal to the sum of R2 and the input resistance seen at the −-input of the op-amp. The output resistance of the inverting op-amp circuit can be calculated in many ways, all working towards Ohm’s law applied to the output port of the system. Driving the output port with an independent signal source yields:

rout = vout/iout
iout = vout/(R1 + R2) + (vout − A·(v+ − v−))/rout,op−amp
v+ = 0
v− = β·vout
β = R2/(R1 + R2), but we are not using this now
iout = vout · (1/(R1 + R2) + (1 + A·β)/rout,op−amp)
rout = (R1 + R2) // (rout,op−amp/(1 + A·β))

It would be great if you’ve just experienced a déjà vu, since this derivation is almost identical to that of the non-inverting amplifier a few pages back. Here we again “see” the resistance of the β circuit at the output, in parallel with the output resistance of the op-amp decreased by a factor (1 + Aβ). The output resistance is very low for a high Aβ or a low rout,op−amp.
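The input and output resistance expressions can be cross-checked numerically; R1 = 10 kΩ, R2 = 1 kΩ and rout,op−amp = 100 Ω are assumed example values:

```python
# Inverting amplifier; R1 = 10 kOhm, R2 = 1 kOhm, rout_op = 100 Ohm assumed.
R1, R2, rout_op = 10e3, 1e3, 100.0
par = lambda a, b: a * b / (a + b)      # parallel combination of two resistors

A = 1e5
q = R2 / (R1 + R2)                      # loop-gain factor R2/(R1+R2)
rin_full = R2 * (1 + q * A) / (1 + q * A - R1 / (R1 + R2))
rin_short = R2 + R1 / (1 + A)           # the readable rewriting
print(abs(rin_full - rin_short) < 1e-6)  # True: the rewritings are identical

beta = R2 / (R1 + R2)
rout = par(R1 + R2, rout_op / (1 + A * beta))
print(round(rout, 4))                   # 0.011 Ohm: very low output resistance
```

The rout term rout_op/(1 + Aβ) is indeed by far the most low-ohmic of the two parallel branches.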

7.2.3 Virtual ground The inverting amplifier was covered in §7.2.2. For this circuit, the +-input of the amplifier was grounded, and the potential of the −-input was almost equal to 0 V. Because the potential at the −-input is almost at ground potential, though it is not actually grounded, we call it a virtual ground. We analyse a number of issues for the part of the inverting amplifier on the right hand side of R2, see figure 7.4.

Figure 7.4: Input impedance at the virtual ground point (input current i_in into the −-input, feedback resistor R1)

The input impedance of the circuit in figure 7.4 is (with rout =0):

\[
\begin{aligned}
r_{in} &= \frac{v_{in}}{i_{in}}\\
v_{in} &= v_-\\
v_- &= -\frac{v_{out}}{A}\\
v_{out} &= v_- - i_{in}\cdot R_1\\
v_- &= -\frac{v_-}{A}+i_{in}\cdot\frac{R_1}{A} \;\Rightarrow\; v_- = i_{in}\cdot\frac{R_1}{A+1}\\
v_{in} &= i_{in}\cdot\frac{R_1}{A+1}\\
r_{in} &= \frac{R_1}{A+1}
\end{aligned}
\]

So, for a large A, this input resistance is very low. For the limit of A → ∞, the input impedance is 0 Ω. The interesting part of this virtual ground point is that the (total) input current "sees" a low impedance, while the entire current runs through an arbitrary impedance (here R1). This means that the circuit can also function as a current-to-voltage converter: the input current sees an ideal (low-impedance) input resistance and is converted to an output voltage using R1.

Miller's theorem

The phenomenon from the previous section can also be described using the Miller effect². Generalizing the resistance in the circuit of figure 7.4 to an impedance Z, the voltage drop across this Z equals (1+A)·v_in. Due to this, the input impedance of the combination of the amplifier with voltage gain −A and the feedback impedance Z is:

\[ Z_{in} = \frac{Z}{1+A} \]

Using a feedback resistor across a voltage amplifier with gain −A results in a low input resistance for the circuit in figure 7.4. Similarly, using a feedback capacitor results in a low input impedance, which corresponds to a high input capacitance. Note that when using a non-inverting amplifier it is also possible to create negative input resistances, negative input capacitances and more useful stuff.

²Named after J.M. Miller, who was the first to neatly describe this effect in 1919 [16].
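Miller's theorem lends itself to a quick numeric sketch. The gain and component values below are assumptions for illustration only; note how a feedback capacitor appears at the input multiplied by (1 + A).

```python
# Miller's theorem, numerically (assumed values): a feedback impedance Z
# across an inverting amplifier with gain -A appears at the input as Z/(1+A).
A = 1000.0                         # voltage gain magnitude (assumed)
R_fb = 100e3                       # feedback resistor, 100 kohm
C_fb = 1e-12                       # feedback capacitor, 1 pF

R_in_miller = R_fb / (1 + A)       # low input resistance
C_in_miller = C_fb * (1 + A)       # large effective input capacitance

print(f"input resistance : {R_in_miller:.1f} ohm")
print(f"input capacitance: {C_in_miller * 1e12:.0f} pF")
```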

7.2.4 The integrator

There is a variety of interesting frequency-dependent linear applications for the op-amp. One of the most simple applications is the integrator. For the configuration of figure 7.5, we assume the op-amp to be ideal, meaning that A → ∞, r_in → ∞ Ω and r_out = 0 Ω. For this circuit, the output signal is

\[ v_{out} = -\frac{Z_1}{Z_2}\cdot v_{in} \tag{7.3} \]
\[ v_{out} = -v_{Z1}(i_{Z2}(v_{in})) \tag{7.4} \]

in the frequency domain and in the time domain, respectively.


Figure 7.5: Basics for an integrator (or something else)

The v_in−v_out relation of an integrator is something like \( v_{out} = B\cdot\int v_{in}\,dt \). By equating this to (7.4), it follows that an integrator can be created by:

• having Z2 perform a voltage-to-current transformation and integrating this current to a voltage using Z1. Hence, Z2 is a resistance and Z1 is a capacitor.

• having Z2 integrate the input voltage to a current, and using Z1 to convert this current back to a voltage. Now, Z2 is an inductance and Z1 is a resistance.

These two generalizations are presented in figure 7.6; a possible derivation of the relation between the input and output voltage is given below for the integrator using a capacitor. Obviously, the derivation for the integrator circuit with an inductor is almost identical.

\[
\begin{aligned}
v_{out}(t) &= v_- - v_C(t)\\
v_C(t) &= v_C(0)+\frac{1}{C}\int_{\tau=0}^{t} i(\tau)\,d\tau\\
i(t) &= \frac{v_{in}(t)}{R_2}\\
v_{out}(t) &= -v_C(0)-\frac{1}{RC}\int_{\tau=0}^{t} v_{in}(\tau)\,d\tau
\end{aligned}
\]

Substitution of the impedances in (7.3) yields an expression for an integrator in the frequency domain: \( v_{out} = -\frac{1}{j\omega RC}\cdot v_{in} \). From this it can be concluded that the term 1/jω corresponds to integration. This is used throughout Laplace transformation, where a significant contribution is that jω is written as s.


Figure 7.6: Integrator realisations
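The time-domain relation derived above, \( v_{out}(t) = -v_C(0) - \frac{1}{RC}\int v_{in}\,d\tau \), can be illustrated with a small discrete-time simulation; component values and step size are arbitrary choices, not from the text.

```python
# Discrete-time sketch of the ideal RC integrator: a constant 1 V input
# integrated over exactly one RC time should give v_out = -1 V.
R2, C1 = 10e3, 100e-9      # 10 kohm, 100 nF  ->  RC = 1 ms (assumed values)
dt = 1e-6                  # simulation time step, 1 us
v_in = 1.0                 # constant input voltage
v_c = 0.0                  # initial capacitor voltage v_C(0)

for _ in range(int(1e-3 / dt)):    # integrate for t = RC = 1 ms
    i = v_in / R2                  # V-I conversion by R2 (virtual ground)
    v_c += i * dt / C1             # the capacitor integrates this current
v_out = -v_c                       # v_out = -v_C since v_- = 0

print(f"v_out after 1 ms: {v_out:.3f} V")
```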

7.2.5 The differentiator

After the explanation, derivation, realization and obtaining some general knowledge of interesting facts considering the integrator, it would (hopefully) be no surprise that we can also create differentiators with an op-amp circuit. Just like in §7.2.4, we can create a relation \( v_{out} = B\cdot\frac{\partial v_{in}}{\partial t} \) with the circuit in figure 7.5 by:

• performing a differential voltage-to-current conversion using Z2 and using Z1 to transform this current to a voltage. In that case, Z2 must be a capacitor and Z1 a resistance.

• converting the input voltage to a current using Z2 and taking the derivative of this current using Z1 and converting that to a voltage. In this case, Z2 is a resistance and Z1 an inductor.

The two situations are given in figure 7.7. A derivation of the large-signal transfer is given below. The derivation for the differentiator with an inductor is completely analogous.

\[
\begin{aligned}
v_{out}(t) &= v_- - v_R(t)\\
v_R(t) &= R_1\cdot i_C(t)\\
i_C(t) &= C_2\cdot\frac{\partial v_{in}(t)}{\partial t}\\
v_{out}(t) &= -R_1 C_2\cdot\frac{\partial v_{in}(t)}{\partial t}
\end{aligned}
\]

Again, substitution of the (frequency-domain) impedances in (7.3) yields an expression for a differentiator in the frequency domain: \( v_{out} = -j\omega RC\cdot v_{in} \). From this it can be concluded that the term jω corresponds to differentiation; in Laplace transformation it is written as s. Note that both the circuit configurations and the transfer functions are the exact complement of those of the integrator.


Figure 7.7: Differentiator realisations
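In the frequency domain the differentiator transfer derived above is H(jω) = −jωR1C2. The short sketch below (assumed component values) shows the magnitude growing by a factor 10 per decade, the hallmark of differentiation:

```python
# Frequency response of the differentiator H(jw) = -j*w*R1*C2 (assumed values).
import cmath

R1, C2 = 10e3, 100e-9             # 10 kohm, 100 nF

def H(w):
    return -1j * w * R1 * C2

for f in (10.0, 100.0, 1000.0):
    w = 2 * cmath.pi * f
    print(f"f = {f:6.0f} Hz   |H| = {abs(H(w)):8.4f}   phase = {cmath.phase(H(w)):+.3f} rad")
```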

7.2.6 Summation of currents

Summing currents is fairly easy, according to Kirchhoff's current law: the summed current flowing out of a node is equal to the sum of the currents flowing into that node. The only thing we need is a node that can drain the summed current (a zero-impedance node) and an output that gives some useful information about this summed current. In §7.2.3, an op-amp circuit that converts an input current to an output voltage was discussed. For this circuit, the input node is a virtual ground: low-ohmic. The circuit in figure 7.4 can be reused to create a circuit that sums currents by simply applying multiple input current sources, see the figure below:


Figure 7.8: Current summing circuit with an ideal op-amp

It can easily be derived that the output voltage can be written as \( v_{out} = -R_1\cdot\sum_{i=1..n} i_{in,i} \). Subtracting currents is just as easy: reverse the direction of an input current source. Changing current directions can, for instance, be done with a current mirror if we are using a unipolar current.

7.2.7 Summation of voltages

When we need to add two or more voltages, we can only do this directly if the sources of these voltages are "floating": if both terminals of the voltage sources can be at any voltage. In reality, this is hardly ever the case. Noting that summing currents is easy using the circuit in figure 7.8, summing voltages can be done by first doing a V-I conversion and then using the current summator:


Figure 7.9: Voltage summation with an ideal op-amp

The circuit in figure 7.9 consists of n linear voltage-to-current converters (resistors), a current summation point (virtual ground point created by an op-amp with feedback) and a current-to-voltage converter (resistor R1). The transfer function can easily be determined (again with an ideal op-amp), for example:

\[
\begin{aligned}
v_{out} &= -i_{R1}\cdot R_1\\
i_{R1} &= \sum_{i=1..n} i_{in,i}\\
i_{in,i} &= \frac{v_{in,i}}{R_{in,i}}\\
v_{out} &= -R_1\cdot\sum_{i=1..n}\frac{v_{in,i}}{R_{in,i}}
\end{aligned}
\]

If all input conversion resistors are equal, then the relation above simplifies to

\[ v_{out} = -\frac{R_1}{R_{in}}\cdot\sum_{i=1..n} v_{in,i} \]

The transfer can also be calculated using a non-ideal op-amp, but then the calculations become somewhat more complex.
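A numeric sketch of the voltage summer, assuming an ideal op-amp; all resistor and signal values below are arbitrary:

```python
# Voltage summer: each source is converted to a current by its own input
# resistor, the currents add in the virtual ground, and R1 converts the
# summed current back to a voltage.
R1 = 10e3
v_in = [0.5, -0.2, 1.0]            # three input voltages (arbitrary)
R_in = [10e3, 10e3, 10e3]          # equal conversion resistors

i_sum = sum(v / r for v, r in zip(v_in, R_in))
v_out = -R1 * i_sum

print(f"v_out = {v_out:.3f} V")    # equal resistors: v_out = -(R1/R_in)*sum(v_in)
```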

7.2.8 Subtraction of voltages

As briefly discussed in §7.2.6, the transformation of a current summator into a current subtractor is fairly straightforward. Subtracting voltages from each other can be realized in a number of ways:

• If the voltage sources are floating (both poles are not grounded), then we can just swap the poles. However, real sources are almost never floating sources.

• Subtracting is the same as adding after first placing a minus sign in front of each term. So, multiplying the voltages by −1 allows us to use the voltage summator (see §7.2.7). But this would be far too complex...

• We can construct an op-amp circuit that has a positive voltage gain (see §7.2.1) for one input signal, and negative (see §7.2.2) for the other input signal.

The latter method is generally used and results in the circuit in figure 7.10.


Figure 7.10: Differential stage for amplification of (v1 − v2)

The signal transfer of this circuit can easily be determined by using the principle of superposition, assuming (for simplicity) A → ∞:

\[
\begin{aligned}
v_{out} &= v_{out}(v_1)\big|_{v_2=0} + v_{out}(v_2)\big|_{v_1=0}\\
v_{out}(v_1)\big|_{v_2=0} &= -\frac{R_2}{R_1}\cdot v_1\\
v_{out}(v_2)\big|_{v_1=0} &= \frac{R_4}{R_3+R_4}\cdot\frac{R_1+R_2}{R_1}\cdot v_2
\end{aligned}
\]

If we want vout to be proportional to (v1 − v2), then we have to satisfy:

\[ \frac{R_3}{R_4} = \frac{R_1}{R_2} \tag{7.5} \]

resulting in:

\[ v_{out} = -\frac{R_2}{R_1}(v_1-v_2) \tag{7.6} \]

The input resistance of both inputs can again be calculated fairly easily by assuming A → ∞. The input resistance "seen" by source v1 equals R1; the input resistance "seen" by source v2 equals R3 + R4. We can make these input resistances equal by choosing proper values for R3 and R4. Simple math then results in:

\[ R_4 = \frac{R_1 R_2}{R_1+R_2} \tag{7.7} \]

\[ R_3 = \frac{R_1^2}{R_1+R_2} \tag{7.8} \]
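A quick check of the matching condition (component and signal values below are arbitrary): with R3/R4 = R1/R2, the superposition result indeed collapses to −(R2/R1)(v1 − v2).

```python
# Difference amplifier (ideal op-amp assumed): compare the full superposition
# expression against the simplified result -(R2/R1)*(v1 - v2).
R1, R2 = 10e3, 50e3
R3, R4 = 10e3, 50e3        # satisfies R3/R4 = R1/R2
v1, v2 = 0.3, 0.1

v_out = (-R2 / R1) * v1 + (R4 / (R3 + R4)) * ((R1 + R2) / R1) * v2
expected = -(R2 / R1) * (v1 - v2)

print(f"v_out    = {v_out:.4f} V")
print(f"expected = {expected:.4f} V")
```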

The output impedance of the circuit in figure 7.10 is of great importance if the circuit must drive something else, which is always the case. For an op-amp with feedback and A → ∞, we can easily see that the output resistance is always R_out → 0 Ω.

7.2.9 Filters

We need analog filters for many different applications³. First-order filters can be constructed very easily using op-amps: a cascade of a first-order RC-filter or a first-order RL-filter and a unity-gain stage using an op-amp would do the job. The op-amp then presents a high-ohmic load to the filter, while its low output impedance enables driving other circuitry without changing the filter characteristics.


Figure 7.11: First-order filter with an op-amp: Z1 and Z2 determine the filter characteristics; R3 and R4, together with the op-amp, determine the gain and output resistance.

Using an ideal op-amp, the transfer function of the circuit in figure 7.11 is:

\[
\begin{aligned}
H(j\omega) &= \frac{R_3+R_4}{R_3}\cdot\frac{v_+}{v_{in}}\\
v_+ &= \frac{Z_2}{Z_1+Z_2}\cdot v_{in}\\
H(j\omega) &= \frac{R_3+R_4}{R_3}\cdot\frac{Z_2}{Z_1+Z_2}
\end{aligned}
\]

Using this principle, we can create a number of different filters. Usually, such filters have only one reactive element (C or L), resulting in a first-order filter. In general, the possibilities are:

³Digital filters can't always be used, due to the required computing capacity or just because the signal is not in a digital form (yet).

• Z1 = R1 and Z2 = C2 gives a first-order low-pass filter

• Z1 = C1 and Z2 = R2 gives a first-order high-pass filter

• Z1 = R1 and Z2 = L2 gives a first-order high-pass filter

• Z1 = L1 and Z2 = R2 gives a first-order low-pass filter

• Z1 = L1 and Z2 = C2 gives a second-order low-pass filter

• Z1 = C1 and Z2 = L2 gives a second-order high-pass filter
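As an illustration of the first case, the sketch below evaluates H(jω) = ((R3 + R4)/R3) · Z2/(Z1 + Z2) with Z1 = R1 and Z2 a capacitor C2, giving a first-order low-pass filter; all component values are assumed.

```python
# First-order low-pass version of figure 7.11: pass-band gain (R3+R4)/R3 = 2,
# magnitude down to 1/sqrt(2) of that at the corner w_c = 1/(R1*C2).
R1, C2 = 10e3, 100e-9              # filter components (assumed)
R3, R4 = 10e3, 10e3                # gain-setting resistors -> gain of 2

def H(w):
    Z1 = R1
    Z2 = 1 / (1j * w * C2)
    return ((R3 + R4) / R3) * Z2 / (Z1 + Z2)

w_c = 1 / (R1 * C2)                # corner frequency: 1000 rad/s here
print(f"|H(0.01*w_c)| = {abs(H(0.01 * w_c)):.4f}")   # pass band, ~2
print(f"|H(w_c)|      = {abs(H(w_c)):.4f}")          # ~2/sqrt(2)
print(f"|H(100*w_c)|  = {abs(H(100 * w_c)):.4f}")    # stop band, ~0.02
```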

Second-order filters can be implemented using one extra resistor and one extra reactive element, typically implementing frequency-dependent feedback around the op-amp. Higher-order filters can easily be constructed using cascades of first-order and second-order filters. The work by Sallen and Key is well known for creating op-amp based filters.

7.3 Feedback with non-linear elements

Many linear applications of the op-amp were discussed in §7.2. The application of op-amps, however, is not limited to linear matters; this section is about feedback using non-linear elements. Although this is not used much in modern times, it does give insight into negative feedback circuits. For that reason, a logarithmic and an exponential converter are discussed below; other non-linear circuits using op-amps include multipliers and an "ideal" rectifier.

7.3.1 Logarithmic conversion A well known non-linear application of an op-amp is the logarithmic converter; this converter has an output signal proportional to the logarithm of the input signal:

vOUT ∝ log(vIN)

If we look at the transfer of a normal inverting amplifier built with an op-amp and two linear resistors, then we have a transfer

\[ v_{OUT} = -\frac{R_{feedback}}{R_{in}}\,v_{IN} \]

This transfer is the result of the V-I conversion by R_in and the I-V conversion, using R_feedback, of the current generated by R_in. In mathematical form, we have:

\[ v_{OUT} = -v_{R_{feedback}}(i_{R_{in}}(v_{IN})) \]

From here, we conclude that we can create a logarithmic converter in (at least) two ways:

• by replacing the element R_feedback with a resistive element whose voltage depends logarithmically on the current; this yields v_out ∝ log(v_in/R_in)

• by replacing R_in with a resistive something where i ∝ log(v); then v_out ∝ R_feedback · log(a · v_in)

The first resistive element can be realized with a diode-like element; the second element cannot be realized very easily. If we use the above principle, then we get the circuit in the figure below:


Logarithmic converter

The transfer of this circuit has to be calculated using large-signal analysis, due to the non-linearities. With an ideal op-amp and neglecting the factor "−1" in the diode equation, we get:

\[
\begin{aligned}
v_{OUT} &= -v_D(i_D)\\
v_D(i_D) &= \frac{kT}{q}\cdot\ln\left(\frac{i_D}{I_0}\right)\\
i_D &= i_R = \frac{v_{IN}}{R}\\
v_{OUT} &= -\frac{kT}{q}\cdot\ln\left(\frac{v_{IN}}{R\cdot I_0}\right)
\end{aligned}
\]
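A numeric sketch of the final expression; R and the saturation current I0 are assumed values, and kT/q ≈ 25.85 mV at room temperature. Every factor of 10 at the input shifts the output by (kT/q)·ln 10 ≈ 59.5 mV.

```python
# Logarithmic converter: v_OUT = -(kT/q) * ln(v_IN / (R * I0)).
import math

kT_q = 0.02585             # thermal voltage at ~300 K, in volts
R = 10e3                   # input resistor (assumed)
I0 = 1e-14                 # diode saturation current (assumed)

def v_out(v_in):
    return -kT_q * math.log(v_in / (R * I0))

for v in (0.1, 1.0, 10.0):
    print(f"v_IN = {v:5.1f} V  ->  v_OUT = {v_out(v) * 1000:8.2f} mV")
```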

7.3.2 Exponential converters

A strongly related non-linear application of op-amp circuits is the exponential converter; this type of circuit performs the complementary operation of the circuit in §7.3.1. Starting with an inverting amplifier configuration, where

\[ v_{OUT} = -v_{R_{feedback}}(i_{R_{in}}(v_{IN})) \]

we see that an exponential converter can be created in (at least) two ways:

• by replacing the resistance R_feedback with a resistive element whose voltage depends exponentially on the current; then we have \( v_{OUT} \propto e^{a\cdot v_{IN}/R_{in}} \)

• by replacing R_in with a resistive something whose current depends exponentially on the voltage; then we have \( v_{OUT} \propto R_{feedback}\cdot e^{a\cdot v_{IN}} \)

Again, the diode-like element allows us to realise one of these possibilities, resulting in the next circuit:


Exponential converter

The transfer function can now simply be derived if we assume the op-amp to be ideal (infinite gain and other convenient properties):

\[ v_{OUT} = -R\cdot I_{D0}\cdot e^{\frac{q\cdot v_{IN}}{kT}} \tag{7.9} \]

7.4 Op-amp non-idealities

So far, we have always assumed op-amps to be ideal. As stated at the beginning of this chapter, the most important assumptions for an ideal op-amp are:

• A → ∞; a bit more specific: the voltage gain is equal for all frequencies and is ∞ for differential signals. We have implicitly assumed the gain for common input signals to be 0 and the output voltage to be 0 for v_in,diff = 0.

• r_out = 0 Ω; a bit more specific: the output resistance is 0 Ω for every frequency and load.

• r_in → ∞ Ω; a bit more specific: the input resistance is always ∞ Ω, and we implicitly assumed there are no DC currents or voltages needed to control the op-amp.

In reality, there are no ideal op-amps, simply because they are built from non-ideal electronic components. The most important non-ideal effects of op-amps will be discussed in this subchapter.

7.4.1 Frequency-dependent gain

An op-amp is actually built from transistors and passive components (resistors, capacitors). The underlying circuits must always consist of multiple amplifier stages, each with its own bandwidth (limitation), since the op-amp is required to have a very high voltage gain with a high input impedance and a low output impedance. The op-amp therefore has, almost by definition, a frequency-dependent transfer function. With the knowledge of chapter 6 it should be no big surprise that most op-amps are designed to be dominantly first-order (idiot-proof that is). The effect of the frequency dependency of op-amps on any system property is easily derived by calculating the desired property for an unspecified voltage gain A and, after deriving that, substituting the actual frequency-dependent A(jω). For unity-gain-stable op-amps that are dominantly first-order, this just boils down to substituting \( A(j\omega) = \frac{A_0}{1+j\omega\tau} \) or something similar. To get readable results, sometimes the result must still be rewritten.
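This recipe can be sketched numerically for a non-inverting amplifier with feedback fraction β: derive H = A/(1 + Aβ) for a symbolic A, then substitute A(jω) = A0/(1 + jωτ). The numbers below (A0, pole frequency, β) are assumptions; the point is that the closed-loop bandwidth widens by roughly (1 + A0β).

```python
# Substituting the first-order op-amp model into the closed-loop gain.
import math

A0 = 1e5                               # DC open-loop gain (assumed)
tau = 1.0 / (2 * math.pi * 10.0)       # open-loop pole at 10 Hz (assumed)
beta = 0.01                            # feedback fraction -> ideal gain 100

def A(w):
    return A0 / (1 + 1j * w * tau)

def H(w):
    return A(w) / (1 + A(w) * beta)    # closed-loop gain

f_cl = 10.0 * (1 + A0 * beta)          # predicted closed-loop corner (Hz)
h_dc = abs(H(0.0))
h_corner = abs(H(2 * math.pi * f_cl))
print(f"DC closed-loop gain       : {h_dc:.2f}")
print(f"gain at f = {f_cl:.0f} Hz : {h_corner:.2f}   (the -3 dB point)")
```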

7.4.2 First-order behavior and slew rate

While analysing op-amps, we have so far assumed the op-amp to be capable of delivering any output current level. However, this is not the case:

• the internal currents are restricted

• the output current is usually also restricted

Both effects and their impact will be discussed within this subchapter.

Internal current limitation and load

A typical excerpt of an op-amp is shown below. As deduced in chapter 6, op-amps are usually built to have a dominant first-order behavior by adding a large capacitor (or something that looks like one, using the Miller effect) at some internal high-ohmic node. The exact operation of the entire input stage is not of interest right now; what is important is that the current flowing into or out of the capacitor usually is limited (both lower and upper).


Figure 7.12: Maximal rise speed or slew rate internally limited

The presence of the capacitor causes:

• (the desired) dominant first-order behavior; this is small-signal behavior

• a restriction on the maximal slew rate of the capacitor voltage. This limitation is an (unwanted) large-signal effect:

\[ \left|\frac{dv_C}{dt}\right| \le \frac{I_{max}}{C} \]

This internal slew-rate limitation results in a limited slew rate of the op-amp's output voltage. This limitation (slew rate, in short: SR) is:

\[ \left|\frac{dv_{OUT}}{dt}\right|_{max} \equiv SR = A_{other}\cdot\frac{I_{max}}{C} \]

If the op-amp ideally would create an output signal \( v_{OUT} = \hat{V}_{out}\cdot\sin(\omega\cdot t) \), which requires a maximum slew rate of \( \omega\cdot\hat{V}_{out} \), then we can directly see that the maximum output amplitude for which a signal at angular frequency ω is undistorted is:

\[ \hat{V}_{out,max} = \frac{SR}{\omega} \]
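A small numeric illustration of \( \hat{V}_{out,max} = SR/\omega \); the slew-rate value below is an assumption, typical for a general-purpose op-amp:

```python
# Largest undistorted sine amplitude versus frequency for a given slew rate.
import math

SR = 0.5e6                     # slew rate: 0.5 V/us, expressed in V/s (assumed)

def v_max(f_hz):
    """Largest undistorted sine amplitude at frequency f_hz."""
    return SR / (2 * math.pi * f_hz)

for f in (1e3, 10e3, 100e3):
    print(f"f = {f / 1e3:6.0f} kHz  ->  V_out,max = {v_max(f):7.3f} V")
```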

Figure 7.13 illustrates this slewing effect. The smallest sine wave corresponds to the undistorted signal, without any slewing effect. For the larger (dotted) sine, the signal is distorted because the required slew rate to get an undistorted sine is larger than the slew-rate limitation imposed by the circuit. This results in a rather distorted signal.


Figure 7.13: The slewing effect

For clarity, the slewing effect has only been shown on the rising side of the sine. Op-amps can have symmetric or asymmetric slewing, depending on their internal structure.

Internal current restrictions and external load

In the previous section, the slewing was due to the internal current limitation at an internal capacitance. However, the output current of an op-amp is usually also limited, which can cause slewing due to the output current limitation combined with an external (load) capacitance; the same story, the same effect, the same misery.

Chapter 8

Positive feedback: oscillators

Feedback and many of its stability aspects were discussed in chapter 6, while a number of stable systems with feedback were presented and analyzed in chapter 7. In stable systems, the loop gain A(jω)β(jω) does not encircle the point "−1" in the complex plane in a clockwise direction. These stable systems are usually linear, except when explicitly using non-linear components or when running into some limit like clipping to the supply voltage. Frequency-domain analyses can be applied to these linear(-like) systems: for these it is assumed that an excitation at ω0 results in signals throughout that system also at ω0. If the "−1" point in the Nyquist plot is encircled by the loop gain A(jω)β(jω), then that system is unstable: it may sit in a meta-stable point, or it goes to and stays in one of the extreme states, possibly changing state periodically. Usually these extreme states correspond to an output voltage equal to the positive or negative supply voltage. Theoretically, a system can stay in a meta-stable point indefinitely, and numerical solvers (simulators) are especially good at finding this meta-stable point; in actual circuits any disturbance (noise, interference, birds flying by) will make the system leave the meta-stable point and go to one of the stable extreme states. These fundamentally unstable systems cannot be analyzed in the frequency domain: they can only be analyzed in the time domain. As interesting and as useful as they may be, we are not going to analyze, introduce or show these unstable circuits. Exactly between the stable systems and the unstable systems are systems for which the loop gain exactly passes through the "−1" point in the Nyquist plot. Systems with a loop gain that crosses the "−1" point will be shown to oscillate harmonically (sinusoidally). A number of them will be introduced and analyzed in this chapter.


Harmonic oscillators are systems for which the Nyquist plot crosses the "−1" point. The hows and whys of this are briefly introduced below. After that, a variety of harmonic oscillators is discussed. Although the basics are quite identical, the actual circuit implementations of oscillators can be very different, each with its specific advantages and disadvantages. It may be clear that harmonic oscillators, for which Aβ = −1 for just one ω, must include a non-zero feedback β(jω). This feedback can be either from the output to the inverting input (whenever available) or to the non-inverting input of the system (if available). A general concept is given in figure 8.1: two frequency-dependent systems with a frequency-dependent feedback¹.


Figure 8.1: (a) Aβ fed to the "−"-input of the amplifier; (b) Aβ fed to the "+"-input of the amplifier.

Harmonic oscillators are always created by using a suitable β circuit which causes the Nyquist factor to be Aβ = −1 for one specific frequency ω_osc. Originally, the Nyquist factor is defined as Aβ for feedback to the inverting input. This is common practice for amplifier circuits and other stable systems, but it is not necessary for oscillator circuits. The two circuits from figure 8.1 have a transfer (for a frequency-independent A > 0) of:

\[
\begin{aligned}
\frac{v_{out}}{v_{in}} &= \frac{A_1}{1+A_1\cdot\beta_1(j\omega)}\\
\frac{v_{out}}{v_{in}} &= \frac{-A_2}{1-A_2\cdot\beta_2(j\omega)}
\end{aligned}
\]

¹We usually take a frequency-independent A, for simplicity. Then the amplification is either positive (non-inverting) or negative (inverting). We can easily visualize this if we use an op-amp, but if we were to use a transistor circuit, then it can sometimes be difficult to see whether or not the circuit is inverting.

• The denominator of the transfer function of the inverting configuration in figure 8.1a is zero for A1β1(jω) = −1. The transfer for that frequency goes to infinity: the circuit can create an output signal at that frequency from nothing. An output signal at one single frequency is a harmonic signal (a sine): we have a harmonic oscillator, or alternatively a sine generator.

• The denominator of the transfer function of the non-inverting configuration in figure 8.1b is zero for A2β2(jω) = 1. The absence of the "−" is due to the feedback to the "+" instead of the "−"-input of the amplifier. Again, we have a harmonic oscillator at the corresponding frequency.

If you do not like to think about whether or not the "−" must be included: just follow the total feedback loop, and write down every gain you encounter in that loop. For harmonic oscillators the transfer of this total loop equals 1 for ω0, which means that a signal at ω = ω0 remains that same signal in phase and magnitude after passing through the loop once, twice, .... For the configurations in figure 8.1a and figure 8.1b we then get:

• for the system in figure 8.1a: the resulting total loop gain is (starting arbitrarily at the output node) β1(jω) · −1 · A1(jω), with for harmonic oscillation β1(jω0) · −1 · A1(jω0) ≡ 1. This is clearly the same condition as described above.

• for the system in figure 8.1b: the resulting total loop gain is (starting arbitrarily at the output node) β2(jω0) · +1 · A2(jω0) ≡ 1. This is also the same condition as described above.

It is basically this simple: follow a complete loop, including "−" signs, and for harmonic oscillation the transfer of this loop is exactly 1 at only the oscillation frequency. Hence, harmonic oscillators always boil down to the same concept: complete loop gain equals 1. There is however a large variety in circuit implementations, but at system level it is just the same:

harmonic oscillation requires a total loop gain equal to 1, meaning that the modulus of the loop gain is 1 and the loop phase is 0 ± k · 2π rad. This is called the "oscillation condition".

Harmonic oscillation: total loop gain ≡ 1; as a result, an initial sine remains unchanged after one loop through the system.

8.1 Harmonic oscillators with a low Q

To make the Nyquist curve cross the "−1" point, we need a system and a feedback circuit. There are many variations possible, which all neatly oscillate at a specific frequency. Within this book, we categorize these oscillators based on the so-called quality factor Q. The quality factor Q of a system is a measure for the loss of energy during one oscillation period. The definition of the quality factor Q is:

\[ Q \equiv \omega\,\frac{E_{total}}{P_{dissipation\text{-}period}} \]

where the factor E_total is the total energy within the oscillator and P_dissipation-period the power dissipated in one period. Hopefully, only a very few of you will be surprised that ω is the oscillation angular frequency. If this relation is rewritten to something more useful, then we have something which only depends on the total energy and the decrease in energy per oscillation period:

\[ Q \equiv \omega\,\frac{E_{total}}{\frac{|\Delta E_{period}|}{T_{period}}} = 2\pi\,\frac{E_{total}}{|\Delta E_{period}|} \tag{8.1} \]

A higher value of Q corresponds to less energy loss. We will cover low-Q oscillators within the current section, §8.1: circuits that have a relatively large loss of energy per oscillation period, and thus take quite a bit of power. High-Q oscillators are covered in §8.2.

8.1.1 General introduction

Harmonic oscillators with a low quality factor dissipate a relatively large amount of energy per oscillation period. Because of this dissipation, the circuit must, by definition, contain dissipating components: resistors. The implementation of these harmonic oscillators, at a high level, is given in figure 8.2.


Figure 8.2: Possible implementations of a harmonic oscillator using one amplifier: a) and b) are special implementations of c).

If we assume the amplifier to be ideal, with a gain A (for simplicity), then the feedback circuit must ensure there is only one frequency for which the loop gain is exactly 1. Hence, the β circuit must have a frequency-dependent transfer function. For the three versions of figure 8.2 we find:

• feedback to the −-input as shown in figure 8.2a:

the total loop gain is −A1(jω)β1(jω), which equals 1 at only the oscillation frequency. Assuming a constant positive A1(jω), at the oscillation frequency ω0 the phase shift of the feedback network β1 equals 180° ± N · 360°. To accomplish this kind of phase shift, at least a third-order feedback circuit is required².

• feedback to the +-input as shown in figure 8.2b:

the total loop gain is +A2(jω)β2(jω), which for harmonic oscillation equals 1 at one frequency. The phase shift of β2 at this frequency is 0° ± N · 360°. To create a frequency-dependent circuit with a phase shift of 0°, we need a second- or higher-order circuit.

• feedback to both inputs as shown in figure 8.2c:

The total loop gain is now just somewhat harder to get: it is (+β3(jω) − β4(jω)) · A3(jω). For harmonic oscillation this total loop transfer equals 1 at only the oscillation frequency. Again, you need a second- or higher-order circuit. This second-order circuit may of course be distributed over β3 and β4.

All these principles, concerning harmonic oscillators with a low Q, are found in many examples in literature or in any old stowed-away box with electronic junk. A few interesting harmonic oscillators with low Q will now be discussed.

²A second-order circuit can also shift the phase to 180°, but then its transfer would be 0. A feedback circuit with transfer 0 would require an infinite gain at the oscillation frequency for harmonic oscillation. Obviously, this is physically impossible.

8.1.2 Wien bridge oscillator A well known implementation of a harmonic oscillator which uses the principle of figure 8.2b is the so-called Wien bridge oscillator. This harmonic oscillator contains a β circuit consisting of 2 resistors and 2 capacitors, which are combined such that the phase shift is 0◦ for the oscillation frequency. The original circuit is given in figure 8.3 and originates from bridge measurement equipment.


Figure 8.3: Wien circuit: the β circuit for a Wien bridge oscillator

Note that the input of the β circuit is connected to the output of the amplifier, while the output of the β network is connected to the input of the amplifier. The transfer of the Wien circuit of figure 8.3 is:

\[
\begin{aligned}
\beta(j\omega) &= \frac{v_{out}}{v_{in}}\\
v_{out} &= v_{in}\cdot\frac{Z_{R2}/\!/Z_{C2}}{(Z_{R1}+Z_{C1})+(Z_{R2}/\!/Z_{C2})}\\
&= v_{in}\cdot\frac{\dfrac{R_2\cdot\frac{1}{j\omega C_2}}{R_2+\frac{1}{j\omega C_2}}}{R_1+\dfrac{1}{j\omega C_1}+\dfrac{R_2\cdot\frac{1}{j\omega C_2}}{R_2+\frac{1}{j\omega C_2}}}\\
&= v_{in}\cdot\frac{\dfrac{R_2}{1+j\omega R_2C_2}}{R_1+\dfrac{1}{j\omega C_1}+\dfrac{R_2}{1+j\omega R_2C_2}}\\
&= v_{in}\cdot\frac{R_2}{\left(R_1+\frac{1}{j\omega C_1}\right)(1+j\omega R_2C_2)+R_2}\\
&= v_{in}\cdot\frac{R_2}{R_1+R_2+\frac{R_2C_2}{C_1}+j\omega R_1R_2C_2+\frac{1}{j\omega C_1}}\\
\beta(j\omega) &= \frac{1}{1+\frac{R_1}{R_2}+\frac{C_2}{C_1}+j\omega R_1C_2+\frac{1}{j\omega R_2C_1}}
\end{aligned}
\]

The oscillation condition states that harmonic oscillation occurs if Aβ(jω) = 1. With a non-imaginary A, the β transfer must also be real. From the transfer we see that:

• the numerator of β(jω) is real

• the denominator is complex, but can be real. If the denominator is purely real, then the transfer is also real.

The oscillation condition is Aβ(jω0) ≡ 1. Equating the loop transfer to 1 yields two expressions/conditions, because the loop gain is complex (with a real and an imaginary part). The oscillation frequency follows from demanding the total loop transfer to be real (zero imaginary part). The required gain A follows from equating the loop-gain magnitude to 1. For the Wien bridge oscillator:

\[ j\omega_0 R_1C_2+\frac{1}{j\omega_0 R_2C_1}=0 \qquad\text{(zero imaginary part)} \tag{8.2} \]
\[ A = 1+\frac{R_1}{R_2}+\frac{C_2}{C_1} \qquad\text{(unity loop gain magnitude)} \tag{8.3} \]

The circuit will, according to (8.2), oscillate at \( \omega_0 = 1/\sqrt{R_1R_2C_1C_2} \) and requires the amplifier gain specified in (8.3). Figure 8.4 shows a few alternative circuits with the same transfer as the Wien circuit. If we also used inductors, then we could create another three alternatives.

Figure 8.4: Alternatives for the Wien circuit
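The Wien results are easy to verify numerically. For the symmetric choice R1 = R2 = R and C1 = C2 = C (assumed below), (8.2) gives ω0 = 1/(RC), β(jω0) = 1/3 and a required gain of A = 3:

```python
# Wien network transfer beta(jw) evaluated at the predicted w0 = 1/(RC).
R, C = 10e3, 100e-9                        # R1 = R2 = R, C1 = C2 = C (assumed)

def beta(w):
    Zs = R + 1 / (1j * w * C)              # series R1-C1 branch
    Zp = R / (1 + 1j * w * R * C)          # parallel R2//C2 branch
    return Zp / (Zs + Zp)

w0 = 1 / (R * C)                           # 1000 rad/s for these values
b0 = beta(w0)
print(f"beta(jw0) = {b0.real:.4f} {b0.imag:+.2e}j   ->   required A = {1 / b0.real:.2f}")
```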

8.1.3 Phase-shift oscillators

Harmonic oscillators that use the inverting amplifier configuration shown in figure 8.2a are usually so-called phase-shift oscillators. For these oscillators the total loop transfer is −Aβ(jω), which equals unity at oscillation. Therefore:

• the β circuit must have a phase shift of −180° ± N · 360° at the oscillation frequency. This requires at least a third-order β network.

• the gain A at the oscillation frequency must be such that the magnitude of the total loop transfer equals unity: |A| = 1/|β(jω0)|.

The β circuit is usually chosen as a third-order low-pass circuit. The simplest form of such a circuit is given in figure 8.5; this circuit has a monotonically decreasing phase shift between 0° and −270°. Add an inverting amplifier, and then we have one frequency for which the total phase shift is exactly 0°.


Figure 8.5: RC-phase-shift circuit

The general transfer function of the circuit in figure 8.5 cannot be easily derived, since all components influence each other. The calculations are less cumbersome if we assume the RC sections not to influence each other (which is not true in general!). A requirement for this assumption is R3 >> R2 >> R1 and C3 << C2 << C1. If we then choose R3C3 = R2C2 = R1C1 = RC, then the transfer function is:

vout/vin = 1/(1 + jωRC)³ = 1/(1 + 3jωRC − 3ω²R²C² − jω³R³C³)   (8.4)

The transfer function is real if:

1 + 3jωRC − 3ω²R²C² − jω³R³C³ = real
3jωRC = jω³R³C³
ω = ω0 = √3/(RC)

The transfer function is real for ω = ω0, which yields β(jω0) = −1/8. To have a stable oscillation we must give the amplifier a gain factor of −8 (inverting). Another way to get the (same) result is to note that if the 3 identical sections in the β network do not influence (or load) each other, then each of them has a phase shift — at the oscillation frequency — equal to −180°/3 = −60°. Working this out also yields ω0 = √3/(RC) and A = −8.
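The result β(jω0) = −1/8 at ω0 = √3/(RC) is easy to confirm numerically for non-loading sections; the RC value below is an arbitrary example choice:

```python
import math

R, C = 10e3, 10e-9           # arbitrary example values
w0 = math.sqrt(3) / (R * C)  # oscillation frequency of the RC chain

# Three identical, non-loading first-order low-pass sections in cascade
beta = (1 / (1 + 1j * w0 * R * C)) ** 3

print(beta)   # ~ -0.125 + 0j, so an inverting gain of -8 gives loop gain 1
```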

A method to keep the RC circuits in the phase-shifting circuit from influencing each other is to use voltage buffers between the RC branches. A neat way to implement this principle is to divide the amplifier into three pieces. The entire amplifier then consists of three identical sections, each consisting of a single amplifier and one RC branch. To do so, the gain of every individual amplifier has to be ∛−8 = −2.


Figure 8.6: A distributed RC-phase-shift oscillator

It probably seems rather cumbersome to create such a distributed amplifier just for calculation purposes. And yes, it would be insane to do so. However, you can create very simple amplifiers that implement both the RC and the gain required in the phase-shift oscillator. The simplest implementation of such an amplifier is the (digital) inverter, as found in digital electronics. The equivalent of the circuit in figure 8.6 is then 8.7a. This circuit is also known as a ring oscillator and is often used as a voltage controlled oscillator.

a)

b)

Figure 8.7: Alternative distributed RC-phase-shift oscillators a) the 3-stage ring inverter b) the N-stage ring inverter

For those who are wondering where the resistor and capacitor have gone: every circuit has some output resistance, which can be exploited. The same goes for the input and output capacitance of the inverter. What if you have an arbitrary number — N — of stages? From the oscillation condition it follows that the phase shift of every section is equal to an odd multiple³ of −π/N. Noting that a first-order RC section can have phase shifts between 0 and −π/2 for low-pass circuits and between π/2 and 0 for high-pass circuits, we can derive that:

³It might be useful to reason why even multiples would not satisfy the oscillation condition.

• N = 3: There is only 1 solution for the circuit of figure 8.7 with N = 3 RC sections. Every single RC section has a phase shift in [0, −π/2], which gives a total phase shift in [0, −3π/2] for N = 3. The only odd multiple of π within that range is then −π, for which the phase shift per RC section is −π/3.

• N = 5: We have something similar: the total phase shift is in [0, −5π/2], where the only odd multiple of π within that range is −π.

• N = 7: Now things are a bit different. We now have 7 sections, hence the total phase shift is in [0, −7π/2], resulting in 2 odd multiples of π within that range: one at −π and one at −3π. However, the oscillator can still only oscillate at 1 frequency: that for which the total phase shift is −π. If you were to (take the challenge to) create a polar figure of the loop gain, then you would see the loop gain Aβ(jω) spiral inwards about the origin. The point -1 can be intersected for both phase shifts, but not at the same time (or not for the same A). If Aβ crosses the -1 point at −3π phase shift, then the curve also encircles the -1 point clockwise, making the circuit unstable. Hence, the only solution for harmonic oscillation is the one for which the loop transfer Aβ = −1 for the smallest A required to do so.

• N>7: Same story, different N.
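Following the enumeration above, the dominant mode of an N-stage ring has an RC phase shift of −π/N per section, so ω0RC = tan(π/N), and each inverting stage then needs a gain of 1/cos(π/N) for unity loop-gain magnitude. This per-stage gain formula is worked out here as a sketch, not stated in the text:

```python
import math

def ring_mode(N):
    # Dominant mode: each RC section lags by pi/N at oscillation,
    # so the corner condition is w0*R*C = tan(pi/N)
    wRC = math.tan(math.pi / N)
    gain = 1 / math.cos(math.pi / N)      # per-stage gain for |loop gain| = 1
    stage = -gain / (1 + 1j * wRC)        # one inverting first-order stage
    return wRC, gain, stage ** N          # loop gain of the whole ring

for N in (3, 5, 7):
    wRC, gain, loop = ring_mode(N)
    print(N, round(wRC, 3), round(gain, 3), loop)
# N = 3 reproduces the distributed oscillator: gain 2 per stage, w0*RC = sqrt(3)
```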

Figure 8.8a shows the polar figures for the oscillator in figure 8.7 with 3, 5 and 7 sections. All versions are set so that each section has a phase shift of −π/N. All curves intersect the +1 point, but do not enclose it.


Figure 8.8: Polar figures of an RC-phase-shift oscillator as in figure 8.7 with 3, 5 and 7 sections. (a) per section −π/N phase shift (b) for N = 7 with a phase shift of −3π/N per section, scaled

Figure 8.8b shows the polar figure for the oscillator in figure 8.7 with 7 sections, where each section contributes −3π/N to the phase shift, for intersection with the point +1. Zoomed out this far, we cannot see what is going on right around the +1 point, but we can see that the curve circles the +1 point at lower frequencies, making the system unstable (bistable in fact).

8.1.4 Startup conditions
The oscillation condition Aβ = 1 only gives us some information on the frequency of the oscillator. The condition does not state anything about how the system will start to oscillate, nor about the oscillation amplitude. It only states that any signal at the oscillation frequency remains unchanged forever at its initial amplitude. In other words, satisfying A(ω)β(ω) = 1 means that the transfer function of the system is infinite for a specific frequency, causing a harmonic signal at that frequency. The amplitude does not grow, decrease or change. From here, we can ask ourselves a few interesting questions:

• How is the oscillation frequency picked from the entire spectrum of possible frequencies?

• How does the amplitude arrive at a steady state?

Every circuit, in operation or not, has an infinitely wide spectrum of frequencies, due to the thermal agitation of the matter from which the circuit is built. These vibrations cause fluctuations in the electric behavior of the components, resulting in an infinitely wide spectrum of electric vibrations, generally called "thermal noise" or "white noise". The amplitude of the electric vibrations can, depending on the components and material, lie between 10⁻⁹ and 10⁻⁶ V. Concluding: the oscillator circuit must perform two tasks: it must "choose" a frequency from the spectrum and then amplify it to the "desired" value.

Frequency preference: If the gain of the oscillator circuit satisfies

A = 1/β(jωo)   (8.5)

directly after switching on the source voltage, then the oscillation condition is satisfied only for ω = ωo, and not for any other frequency. However, there is no amplitude other than that of the noise.

Amplitude: The initial very low amplitude of oscillation — due to noise or something else — must be increased during startup of oscillation to get a sensible oscillation magnitude. This requires a loop transfer with a slightly larger gain than needed for Aβ(jωo) = 1, something like A = A(1 + ε). In a polar plot this means that the system now encircles the -1 point clockwise by just a tiny bit, making it fundamentally unstable. The instability is in that the magnitude will grow and continue to grow indefinitely, unless we tune it back at some stage to get exactly Aβ(jωo) = 1. Mathematically, a loop transfer a little larger than 1 will force the output voltage at frequency ωo to develop as:

vo(ωo) = A·(1 + ε)·(1 + A·(1 + ε)·βmax + A²·(1 + ε)²·β²max + A³·(1 + ε)³·β³max + ... + Aⁿ·(1 + ε)ⁿ·βⁿmax) V


Figure 8.9: Non-constant amplification to start up an oscillator and to stabilize the magnitude at some specific value.

Clearly the amplitude will now grow indefinitely for ε > 0. If there is nothing to stop this growing process, then the growth of the amplitude will continue until it e.g. "clips" at the source voltage. Upon clipping, the effective (average over the entire signal) loop transfer gain decreases to a level that corresponds to exactly Aβ = 1. The output signal then is distorted. A better way to implement proper startup behavior and a nice output sine wave is using a better way of controlling the loop gain. One way is to actively control the gain of the amplifier based on the magnitude of the output sine, using a relation such as in figure 8.9. Due to e.g. clipping

• the oscillation signal becomes somewhat distorted

• the effective loop gain decreases slightly: the average amplification over the entire wave decreases.

Hence, the problem is in the growth of the amplitude, which has to be present in the first place, but has to be decreased later on to obtain the "desired" amplitude. Decreasing the amplitude growth is realized by having an amplitude-dependent gain: we get a nonlinear amplification⁴. Figure 8.9 shows a possible curve for A(vosc) which can result in the desired effect.
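The stabilizing effect of an amplitude-dependent gain as in figure 8.9 can be illustrated with a toy iteration. The gain law below is an arbitrary decreasing function chosen for the sketch, not one from the text: per traversal of the loop the amplitude is multiplied by the instantaneous loop gain, which starts slightly above 1 at noise level and drops below 1 once the amplitude has grown.

```python
def loop_gain(v, eps=0.1):
    # Arbitrary decreasing gain law: slightly > 1 for tiny v,
    # falling below 1 once v grows past the intended amplitude
    return (1 + eps) / (1 + v)

v = 1e-6                 # startup amplitude: thermal noise level
for _ in range(500):     # iterate the loop many times
    v *= loop_gain(v)

print(v)   # settles where loop_gain(v) == 1, i.e. at v == eps == 0.1
```

The fixed point is exactly where the effective loop gain has dropped back to 1, mirroring the verbal argument above.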

Creating an amplitude-dependent transfer function

An amplitude-dependent gain can be created in a number of ways:

• By having a negative feedback factor which is dependent on the amplitude; for example by implementing NTC resistors⁵ or PTC resistors⁶. An example is given in figure 8.10, where R4 can be a constant and R3 an NTC resistor, or R4 a PTC resistor and R3 a constant.

⁴'Jamming' of the output signal to the source voltage is strictly speaking also a non-linear gain, although quite crude.
⁵NTC is short for Negative Temperature Coefficient: an NTC resistor decreases in resistance for increasing temperature.
⁶PTC is short for Positive Temperature Coefficient: for increasing temperatures the resistance increases.

(figure: Wien network with amplifier, A = 1 + R3/R4)

Figure 8.10: Amplitude-dependent gain by β = β(T) with T = T(vosc)

• By directly changing the gain factor of an amplifier, for instance by making the bias current of the amplifier dependent on vosc, as shown in figure 8.11. We call such a construction an Automatic Gain Control (AGC) system.


Figure 8.11: Automatic Gain Control

8.2 Harmonic oscillators with higher Q

§8.1 covered harmonic oscillators with a low quality factor. A low Q corresponds to a high loss of energy per oscillation period. An oscillator will (hopefully) oscillate at a constant amplitude, meaning that there has to be some active component to correct this loss of energy. For the oscillators in §8.1, this loss of energy was corrected by circuits like opamps or inverters. These circuits can be used to create reasonable low- and average-frequency oscillators. Creating low Q oscillators becomes increasingly difficult for higher frequencies (especially in the GHz domain). §8.2 covers oscillators with a high quality factor. These have a relatively low loss of energy per oscillation period, and should work with far less ideal amplifiers. The majority of §8.2 covers high Q oscillators with amplifiers consisting of just one transistor⁷.

8.2.1 Single transistor oscillators

To create high Q oscillators, we must decrease the dissipated energy per period. So, every form of dissipation has to be minimized, meaning that we need to reduce the number of resistors. We are left with oscillators built from inductors, capacitors, one transistor and a bias circuit.


Figure 8.12: Harmonic oscillator with high Q, built from one transistor

The β circuit for high Q and high frequency oscillators is usually made from capacitors and inductors or other components which have low loss by themselves, like crystals. We will use a single transistor, since op amps are not suitable amplifiers for high frequency circuitry. Figure 8.12 shows the principle of two implementations, without source or emitter degeneration and with resistive biasing, all for simplicity reasons.

⁷Using 2 transistors would give you a far better amplifier, and you have some more degrees of freedom. The theory in this section allows you to analyze and create more complex oscillators with a high Q.

The small-signal equivalent circuit of the amplifier circuits in figure 8.12 is shown in figure 8.13.


Figure 8.13: Small-signal equivalent circuit of amplifier blocks in figure 8.12.

The input resistance of the amplifier in figure 8.13 is due to the bias circuit (and for the BJT also the input resistance αfe/gm of the BJT itself). The output resistance is due to the collector or drain resistance, and the output resistance of the transistor. Both resistances are finite and will contribute to a (hopefully small) dissipation of energy.

A first try with a BJT
If we use a single transistor amplifier circuit with a BJT, then the small-signal equivalent of this amplifier with feedback via impedances ZA and ZB is presented in figure 8.14. In it, the (influence of the) input and output resistances of the BJT is neglected, for simplicity.


Figure 8.14: Feedback

The circuit can oscillate if the total loop gain equals 1 for just one frequency. Arbitrarily defining the signal gain A from left to right and (then) defining the β from right to left, hence defining vin = vbe and vout = vc:

vin = vbe
vout = vc
A = −gm · (rout//(ZA + ZB))
β = ZA / (ZA + ZB)

For a negligible (e.g. very high ohmic) rout we have:

Aβ = −gm · ZA   (8.6)

To get oscillation, the loop gain of (8.6) must be equal to 1. Because the transconductance gm of a transistor is real and positive, this would require a negative resistance for ZA. Conventional passive components cannot perform such a function⁸.

Problems with calculating loop gain
You can run into some strange problems while determining the loop gain. The cause is that a loop has to be calculated: the input signal of the circuit is created by the circuit itself. There are multiple valid methods available for determining the loop gain:

• define an A and β and start working on them separately. What you select for A and β does not matter; only the product of the two matters. If the A and β together form the loop, then you're fine.

• calculate Aβ all at once. This removes the need of defining an A and β, but you still need to pick a (any!) node in the circuit to start your loop.

The figure below, subfigure a), shows a basic configuration for a single transistor oscillator with three feedback impedances ZA, ZB and ZC. We can define A as A = vout/vbe, and thus define β = vbe/vout, which is perfectly fine as this A and β together form the loop. An obvious (but wrong) method for calculating A is then to use the circuit of b) in the figure below: force vbe and calculate vout:

Awrong ≡ vout/vbe = (ZC − gm · ZBZC) / (ZB + ZC)

This method is incorrect since the impedance at the base node of the transistor has changed during the calculation: the base node was originally high-ohmic (something with ZA, ZB, ...) but is 0 Ω only for this calculation. The solution is very simple if you know it: control vbe in such a way that you change nothing about the impedance levels anywhere in the circuit. It seems complicated, but it is surprisingly simple if you "cut" the loop inside the transistor and assume a driving v′be, see circuit c) in the figure below.

Loop calculations: right and wrong

⁸If you thought it was possible: passive components do not provide any power, they can only store and dissipate power. Negative resistors supply power, thus they can inherently not be passive.

Now we get:

Acorrect ≡ vout/v′be = −gm · (ZC //(ZA + ZB))
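The difference between the two results is easy to see numerically. The impedance values below are arbitrary (and purely resistive, for simplicity); they only serve to show that the two expressions do not agree:

```python
# Arbitrary example values, purely resistive for simplicity
gm = 0.01                            # transistor transconductance [S]
ZA, ZB, ZC = 100.0, 1000.0, 500.0    # feedback impedances [Ohm]

# Wrong: forcing v_be shorts the base node (its impedance level changed)
A_wrong = (ZC - gm * ZB * ZC) / (ZB + ZC)

# Correct: cut the loop inside the transistor and drive v'_be instead
A_correct = -gm * (ZC * (ZA + ZB)) / (ZC + ZA + ZB)

print(A_wrong)     # -3.0
print(A_correct)   # -3.4375: clearly a different answer
```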

With a BJT... second try If we look at figure 8.14 more closely, and analyze it together with (8.6), then we find a number of interesting conclusions:

• the transconductance gm is real and positive.

• if ZA is made from passive components R, L or C, then the (vector) ZA is always in the first or fourth quadrant of the complex plane.

• the vector −ZA · gm is always in the second or third quadrant of the complex plane, hence it can never be equal to 1.

To satisfy the oscillation condition with the given circuit, we have to "rotate" the feedback voltage (vector) within the complex plane. In other words: we have to implement extra phase shift in order to have oscillation. The most efficient method for implementing extra phase shift is adding an impedance ZC parallel to rout, as shown in figure 8.15.


Figure 8.15: Single transistor with high Q: basic principle

The loop gain for the circuit of figure 8.15 is easily obtained by (for instance) defining an A and β and working your way up from there. If rout may be neglected⁹:

vin = vbe
vout = vc
A = −gm · (ZC //(ZA + ZB)) = −gm · ((ZA + ZB) · ZC) / (ZA + ZB + ZC)
β = ZA / (ZA + ZB)
Aβ = −gm · ZAZC / (ZA + ZB + ZC)

⁹Instead of neglecting, rout can also be captured in ZC.

We immediately see that the oscillation condition for this type of circuit configuration can also be written as

ZA + ZB + ZC = −gm · ZAZC (8.7)

This condition can be satisfied in various ways. The oscillators corresponding to these various methods were invented a long time ago by some smart men, after whom the oscillators are now named. Further on, we will elaborately cover the Colpitts oscillator; the analysis of other comparable oscillators is similar.

Which ZA, ZB and ZC?
We derived above that for the circuit configuration in figure 8.15 the oscillation condition can be rewritten as (8.7). This condition can be satisfied in a number of ways, as demonstrated below. For this, first a few observations are explicitly stated for simplification reasons:

• we always need ZB, otherwise there would not be any feedback.

• if there is a ZB, there must also be a ZA, because otherwise the feedback factor would be exactly 1, frequency independent.

• if we may not use a negative resistance, then we need a ZC .

• for high Q oscillators the components ZA...ZC are not resistors, but they may have (in general) a small resistive component, due to non-idealities and the parallel rin and rout of the transistor.

From the above enumeration, we find the following possibilities for ZA...ZC :

• ZX is an inductance with a small resistance in series (or large parallel resis- tance). This is equal to Zx = jωL + Rseries in the complex (impedance) plane, which is in the first quadrant, close to the imaginary axis.

• ZX is a capacitance with a small resistance in series (or a large parallel resistance). The vector in the complex (impedance) plane is then ZX = −j/(ωC) + Rseries, which is in the fourth quadrant, close to the negative imaginary axis.

Combining all this immediately reveals that the vector ZA + ZB + ZC will always be in the first or fourth quadrant. To satisfy (8.7), the vector −gm · ZA · ZC must also be in the first or fourth quadrant. Since gm is positive and real, this can only be realized if ZA and ZC are reactive components of the same type (both inductive or both capacitive). The most obvious choices for ZA and ZC are therefore both capacitive or both inductive. In both cases, we need to set a correct ZB and a suitable gm to satisfy (8.7). The vector diagrams of both implementations are given in figure 8.16a and b¹⁰. A number of single transistor oscillators is listed in the table below; these all satisfy (8.7). The various implementations are named after their discoverers: Clapp, Colpitts, Hartley,

¹⁰The real part of the reactive components in figure 8.16 is drawn out of proportion for illustration purposes.


Figure 8.16: Impedances satisfying (8.7): (a) ZA and ZC capacitive (b) ZA and ZC inductive

Meacham, Butler, Miller, Seiler and Pierce. Very few people actually know who discovered which, but the first three are displayed in the table below.

name      ZA   ZB    ZC
Colpitts  C    L     C
Clapp     C    L+C   C
Hartley   L    C     L

The Colpitts oscillator

Figure 8.17 shows a circuit of an oscillator where ZA and ZC are both capacitive, and ZB is inductive: the Colpitts oscillator. Compared to the idealized circuit schematic shown earlier, some extra components are included for biasing purposes and to decouple DC signal levels. For analysis purposes, we now first construct a small-signal equivalent circuit, assuming low-ohmic (at the oscillation frequency) Ck and CE.


Figure 8.17: Colpitts oscillator with replacement circuit

For single transistor oscillators (8.7) holds; working out this relation:

ZA + ZB + ZC = −gm · ZAZC

with

ZA = RB1 // RB2 // (αfe/gm) // (1/(jωCA))
ZB = jωLB
ZC = RC // (1/(jωCC))
gm = 40 · IC

If the resistance of the bias circuit is negligibly large, hence for RB1//RB2 >> αfe/gm, then we obtain a somewhat more readable equation. If we then introduce the symbols τx for the cutoff points of ZA and ZC, then we get:

−gm · (αfe/gm)/(1 + jωτA) · RC/(1 + jωτC) = (αfe/gm)/(1 + jωτA) + jωLB + RC/(1 + jωτC)   (8.8)

τA = (αfe/gm) · CA
τC = RC · CC

This is a complex equation (in the sense that there are "complex numbers present"), hence it has two parts: a real and an imaginary part. Expanding and simplifying using (real parts) = 0 and (imaginary parts) = 0, and using

−αfe · RC = (αfe/gm)(1 + jωτC) + jωLB(1 + jωτA)(1 + jωτC) + RC(1 + jωτA)

yields the two equations

−αfe · RC = αfe/gm − ω²LB · (τA + τC) + RC   (Re part)
0 = (αfe/gm) · τC + LB · (1 − ω²τAτC) + RC · τA   (Im part)

Substituting the time constants

ω²LB · ((αfe/gm)CA + RCCC) = αfe/gm + (1 + αfe)RC
ω²LB · CA · RC · CC = (gm/αfe) · LB + CA · RC + RC · CC

This gives a few very ugly expressions that, however, become quite simple for αfe → ∞. Note that this assumption assumes the input impedance of the BJT to be negligibly large compared to the impedances of CA and LB at the oscillation frequency. Now, solving the imaginary part of the equation — actually setting Aβ to a real value — yields the oscillation (angular) frequency:

ωo ≅ √((CA + CC)/(LBCACC)) = √(1/(LBCseries))   (8.9)

From the other relation — setting Aβ = 1 at the oscillation frequency — we find a condition for the value of the circuit elements:

gm · RC ≅ CA/CC   (8.10)

Designing an oscillator is similar to doing an analysis, but the other way around. Below, a simplified design procedure for a Colpitts oscillator at 10 MHz is shown, corresponding to ωo = 2π · 10⁷ rad/s. In the design, CA, LB, CC and gm must be dimensioned. Because there is only one hard requirement — the oscillation frequency — yet 4 values to choose, there are 3 more or less arbitrary values to select. The choice for a suitable ZLB can be made based on the practical impedance levels and a practical value of the component:

• a high value for |ZLB| causes many parallel impedances to come into play, influencing the oscillation behavior. Also, the effects of RB1, RB2 and αfe/gm may not be neglected either.

• for a high value of |ZLB| the values of the capacitors CA and CC need to be small to reach ωo. In practice, there is a lower boundary for the available value of these capacitors, which then also limits the upper value for LB.

• For low impedance levels of |ZLB|, we have similar unwanted side-effects. LB is small for low values of |ZLB|, causing the effect of series resistances and parallel capacitances to increase. At the same time, the effect of the series inductances of the capacitors is larger.

A "practical" value of LB can be 1 μH, for which ZL = j63 Ω. This results, for the capacitance in series, in a value of about 250 pF, which is −j63 Ω at the oscillation frequency. The voltage gain factor gm · RC can be arbitrarily chosen at 10, resulting in CA = 10 · CC, CA ≈ 2.75 nF and CC ≈ 275 pF.
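These design values can be checked by evaluating the loop gain Aβ = −gm · ZAZC/(ZA + ZB + ZC) at the frequency from (8.9). RC = 1 kΩ and gm = 10 mS are assumed here (any pair with gm·RC = 10 would do), and the bias resistors and αfe/gm are taken to be negligible:

```python
import math

LB = 1e-6                   # 1 uH
CA, CC = 2.75e-9, 275e-12   # design values from the text
RC, gm = 1e3, 10e-3         # assumed: any pair with gm*RC = 10 works

w0 = math.sqrt((CA + CC) / (LB * CA * CC))   # (8.9), close to 2*pi*10 MHz

ZA = 1 / (1j * w0 * CA)                      # bias resistors neglected
ZB = 1j * w0 * LB
ZC = RC / (1 + 1j * w0 * RC * CC)            # RC // 1/(jwCC)

loop_gain = -gm * ZA * ZC / (ZA + ZB + ZC)
print(w0 / (2 * math.pi))   # about 1.0e7 Hz
print(loop_gain)            # about 1 + 0j: the oscillation condition is met
```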

At startup, the loop gain of a harmonic oscillator should be a little larger than 1, and must decrease to exactly 1 once a suitable oscillation amplitude is reached. In a Colpitts oscillator as shown in figure 8.18, amplitude control may be implicitly implemented via a clamping action: the combination of Ck and the BE-junction acts as a clamp circuit that decreases the base voltage with increasing amplitude¹¹.


Figure 8.18: Colpitts’ amplitude control via Ck and the BE-junction

Another way to efficiently decrease the gain is having the sine (softly) clipping to one of the supply rails.

The Clapp oscillator
The effect of the Miller capacitance can be a nuisance in a Colpitts oscillator; this capacitance is in parallel to CA. As a consequence, the value of CA cannot be smaller than the value of the Miller capacitance, making it very difficult to achieve high oscillator frequencies. A solution for this is given in the circuit below, which uses an inductive component in series with a capacitor CB.

¹¹The combination of CE and the BE-junction is also a clamp, but due to the low resistivity of RE, this clamp will have a much smaller time constant, so it will not be dominant.


The Clapp-oscillator

Using the oscillation condition, we find

ωo ≅ √(1/(LBCC) + 1/(LB(CA + CMiller)) + 1/(LBCB))   (8.11)

The Hartley oscillator

The circuit given below is an implementation where ZA and ZC are inductive, and ZB is capacitive, named after Mr. Hartley, who originally designed a vacuum tube oscillator with this principle.


A Hartley-oscillator

The derivation of the oscillation frequency and required voltage gain is analogous to that for the Colpitts circuit; the oscillation frequency is:

ωo ≅ 1/√((LA + LC)CB)   (8.12)

while for the gain it follows that

gm · RC ≅ LC/LA   (8.13)
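As with the Colpitts circuit, (8.12) and (8.13) can be verified by evaluating the loop gain at ωo. The component values below are arbitrary assumptions, chosen so that gm·RC = LC/LA:

```python
import math

LA, LC = 1e-6, 10e-6        # arbitrary example inductors
RC, gm = 1e3, 10e-3         # assumed so that gm*RC = LC/LA = 10
f0 = 10e6                   # target oscillation frequency

w0 = 2 * math.pi * f0
CB = 1 / (w0**2 * (LA + LC))   # from (8.12)

ZA = 1j * w0 * LA
ZB = 1 / (1j * w0 * CB)
ZC = RC * (1j * w0 * LC) / (RC + 1j * w0 * LC)   # RC // jwLC

loop_gain = -gm * ZA * ZC / (ZA + ZB + ZC)
print(loop_gain)   # about 1 + 0j: oscillation condition satisfied
```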

8.2.2 Crystal oscillators

Quartz crystals — of crystalline SiO2 — are very well suited for creating very frequency-stable oscillators. Quartz is (weakly) piezoelectric, meaning that there is some interaction between the electrical and the mechanical domain. The mechanical characteristics, such as the expansion coefficient and the modulus of elasticity, are not very sensitive to temperature, and there is very little internal friction. When the crystal is provided with an electric AC voltage, it will vibrate mechanically. The electric behavior is very stable, due to these stable mechanical parameters.


Figure 8.19: (a) symbol for a crystal (b) electric equivalent circuit (c) resonance curve

A crystal with electrodes behaves, electrically speaking, like a one-port system, with a frequency characteristic similar to the one shown in figure 8.19c; the electric model of figure 8.19b shows an equivalent. A resonating crystal has a shockingly high quality factor: a Q-factor of over 10⁵ is quite common. The crystal has two modes of oscillation:

• series resonance occurs for the voltage controlled (and thus virtually short-circuited) case. Then C′ has no influence and the resonance frequency is:

ωs = 1/√(LsCs)

• parallel resonance occurs for the current controlled situation (hence with a virtual open connection). The total loop contains Cs and C′ in series, and the resonance frequency is:

ωp = 1/√(Ls · CsC′/(Cs + C′))

The parallel resonance frequency is slightly higher than the series resonance frequency, since CsC′/(Cs + C′) < Cs.

The numerical values of the parameters L and C of a quartz crystal with high Q are somewhat unusual. For example, in the model of a crystal with a (series resonance) frequency of 10 MHz, the parameter Ls, as an equivalent of the mechanical mass, is 12 mH, which is relatively large. At the same time, Cs, the equivalent of the (inverse of the) mechanical stiffness, is a very small 33 fF (33 · 10⁻¹⁵ F). R is 5 Ω and represents the internal (viscous) friction. The value of C′, the electrostatic capacitance between the electrodes, is about 7 pF. This gives a quality factor Qs of 145,000 for the mesh containing Ls, Cs and R.
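The unusual proportions show up directly when these model values are plugged into the resonance formulas. The sketch below takes the text's numbers as given and only checks orders of magnitude; the exact fs and Q obtained this way depend on the rounding of the quoted model values:

```python
import math

Ls = 12e-3      # motional inductance (mechanical mass)
Cs = 33e-15     # motional capacitance (inverse stiffness)
R  = 5.0        # motional resistance (viscous friction)
Cp = 7e-12      # electrode capacitance C'

ws = 1 / math.sqrt(Ls * Cs)                     # series resonance
wp = 1 / math.sqrt(Ls * Cs * Cp / (Cs + Cp))    # parallel resonance
Q  = ws * Ls / R                                # quality factor of the mesh

print(ws / (2 * math.pi))   # series resonance frequency, order 10 MHz
print((wp - ws) / ws)       # well under 1%: wp lies just above ws
print(Q)                    # order 1e5: a shockingly high Q indeed
```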

The series resonance frequency and the parallel resonance frequency, ωs and ωp respectively, are usually very close to each other, due to the (generally) very small value of Cs. The crystal can very well be implemented in a Clapp circuit, due to the resonance characteristic of Ls and Cs, by replacing the series LB and CB by the crystal. The new circuit is called a Pierce oscillator.

Oscillator circuits with crystal
The inductor of a Colpitts oscillator can be replaced by a crystal, leading to the circuit in figure 8.20. The crystal is inductive in just a very small frequency band, and in this very narrow frequency band it can assume any inductive value. Therefore, somewhere in this very narrow frequency band the oscillation condition (concerning having a real Aβ) will be satisfied¹².


Figure 8.20: Crystal parallel resonator: the crystal replaces an inductor.

¹²Clearly the amplifier has to provide sufficient gain to also satisfy |Aβ| = 1.

Chapter 9

Basic internal circuits for op amps

9.1 Introduction

In chapters 6-8, many circuits that use operational amplifiers (op amps) have been analyzed, synthesized and shown. In those chapters, a lot of attention went into the external behaviour (bandwidth, amplification, output impedance, etc.) of an op amp, but little to none went into their internal structure. In this chapter, the internal circuits of an op amp are discussed. As seen before, there are many requirements for an op amp, including:

• an op amp has to operate on a (typically small) differential input signal. In other words: it must have a differential input stage, which works especially well for small signals.

• the voltage gain of the op amp must be very large.

• the input impedance must be sufficiently high.

• the output impedance must be sufficiently low.

• the transfer function must be first order dominant, because of the possibility of instability when the op amp is operated by persons that are not paying attention (or are insufficiently educated).

• any non-ideal properties, like offset and noise, must be as small as possible.

With a circuit containing only a single transistor, as discussed in chapter 5, one cannot comply with all these requirements at the same time. The biggest issues for these simple circuits are the lack of a differential input stage for small signals, and combining a high gain with a sufficiently low output impedance. Like in chapter 5, conflicting requirements can be circumvented by using multiple amplification stages¹. A number of the stages that may be used in an op amp are identical to certain circuits from chapter 5, while others are specifically catered to their specific task in the op amp. The general construction of an op amp is shown in figure 9.1: an input circuit that satisfies all the input requirements, an output stage

1Mathematically speaking, this is similar to adding degrees of freedom to a system, that thereafter – as the word implies – can be used to obtain one’s goals.

191 192 CHAPTER 9. BASIC INTERNAL CIRCUITS FOR OP AMPS that does the same for the output and something in between that completes the total behaviour.

[Block diagram: differential stage with conversion of v to i (inputs v1, v2) → amplification stage combined with conversion of i to v → power stage with low impedance output (vOUT)]

Figure 9.1: General form of an operational amplifier.

First of all, the input stage will be discussed in §9.2. This circuit is unlike any other that has been dealt with in this book so far. Input stages usually have a differential output current, which is often converted to a single-ended current to be processed further. This (possible) conversion is dealt with in §9.3. The output current of the input stage is amplified and converted into a voltage in an intermediate stage; these stages are discussed in §9.4. To properly drive an external load, output stages are often needed. These are discussed in §9.5. Finally, bandwidth limitations are dealt with in §9.6.1.

9.2 The input stage

The input stage of an op amp must meet a number of requirements, which follow from the practical applications of an op amp. The input stage:

• must operate on a (typically small) differential input voltage, which can be either positive or negative.

• should not be sensitive to the non-differential (a.k.a. common) input signal. This means that a differential signal {v1 = 0.001, v2 = −0.001} must yield the same result as a differential signal {v1 = 2.001, v2 = 1.999}.

• must have a sufficiently high input impedance. This requirement is kind of vague, because “sufficiently high” is dependent on the op amp’s application and because the input impedance as seen by an input source is very dependent on the feedback around the op amp.

From the first requirement, it follows that most circuits as seen in chapter 5 cannot be used as input stages for an op amp: they do operate on small signal variations, but when looking at large signal behaviour we see that bias voltages are required. The left circuit of figure 9.2 shows this problem. Only around a specific vDIFF — approx. 0.6 V in this case — might this circuit operate reasonably as a differential stage, but noting that 0.6 V is nowhere near “small” signals around 0 V, this is a very poor solution.

Figure 9.2: Attempts to realize a differential stage with a single transistor circuit.

The problem illustrated above can be dealt with by using decoupling capacitors: in that case, the input voltage doesn’t need to be close to the bias level. For example, we might bias the transistor using the right hand circuit in figure 9.2, while the input voltage variations are added on top of the bias levels. However, these types of circuits tend to have a low gain for low frequencies (like, for example, 0 Hz). This is very disadvantageous for op amps.

9.2.1 Symmetry requirement

As shown before, a single transistor circuit cannot implement a decent differential stage. From the definitions of “differential” and “common”:

vDIFF = v+ − v−
vCOMMON = (v+ + v−)/2

it can be seen that the differential input signal that is to be amplified is symmetrical around the common part of the input signal. To properly (symmetrically) operate on something that is symmetrical, one usually needs something that is also symmetrical. A single transistor circuit, sadly, is not. There are multiple ways to make a symmetrical circuit that can operate on small differential input voltages and has an output current that is a symmetrical function of the input voltage. This book will deal with a few relatively simple circuits; more complex circuits are typically variations on these basic circuits.
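
These definitions are easy to try out numerically. The sketch below (Python, with the example voltages from the common-mode requirement in §9.2) shows that two very different input pairs can carry exactly the same differential signal:

```python
# Differential/common decomposition of an input pair, following
# vDIFF = v+ - v- and vCOMMON = (v+ + v-)/2.

def decompose(v_plus, v_minus):
    """Return (v_diff, v_common) for a given input pair."""
    return v_plus - v_minus, (v_plus + v_minus) / 2

# The two input pairs from the common-mode requirement in section 9.2:
print(decompose(0.001, -0.001))  # 2 mV differential part, 0 V common part
print(decompose(2.001, 1.999))   # same 2 mV differential part, 2 V common part
```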

9.2.2 First implementation: large signal behaviour

Assuming the use of asymmetrical components (like MOS transistors, BJTs or any other electrically amplifying component), it is possible to realize symmetric functionality with 2 of those components. From the requirements for symmetry and the use of transistors, it follows immediately that (in the case of a BJT):

• both input signals must be connected to either the emitters of the transistors, or to their bases

• the signal path through the input stage (for example from the positive input to the negative input) must contain both a vBE and a −vBE.

A possible implementation of this is given in figure 9.3, shown with NPNs and PNPs. The transfer function from differential input voltage to differential output current is given for the NPN version:

iOUT = iCl − iCr
iCl = IC0 · e^(q·vBEl/kT)
iCr = IC0 · e^(q·vBEr/kT)
iOUT = IC0 · ( e^(q·vBEl/kT) − e^(q·vBEr/kT) )
     = IC0 · e^(q·vCOMMON/kT) · ( e^(q·vDIFF/2kT) − e^(−q·vDIFF/2kT) )   (9.1)

This circuit does have a few problems. First of all, the current in the circuit is strongly dependent on vCOMMON, because for small differential voltages the following holds:

iTOT = iCl + iCr = IC0 · ( e^(q·vBEl/kT) + e^(q·vBEr/kT) )
     ≈ 2·IC0 · e^(q·vCOMMON/kT)

This strong dependency of the current iTOT on vCOMMON is also the cause for the second problem of this circuit: the required transfer function iOUT(vDIFF) is a function strongly dependent on vCOMMON. This follows from inspecting (9.1), and also from a simple calculation of the small signal transfer function for small vDIFF:

H = idiff/vdiff = ∂iOUT/∂vDIFF = gm
gm = (q/kT) · IC0 · e^(q·vCOMMON/kT)
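
How strong this vCOMMON dependency is can be made concrete with a small numerical sketch (Python; the IC0 value and the room-temperature kT/q below are assumed example values, not from the book): shifting vCOMMON by only 60 mV changes gm by roughly a factor of 10.

```python
import math

# gm of the tail-less pair of figure 9.3: gm = (q/kT) * IC0 * exp(q*vCOMMON/kT).
# Assumed example values:
kT_over_q = 0.02585   # V, at room temperature
IC0 = 1e-15           # A, assumed saturation current

def gm(v_common):
    return (1 / kT_over_q) * IC0 * math.exp(v_common / kT_over_q)

# Roughly 60 mV more common voltage gives about a decade more transconductance:
print(gm(0.66) / gm(0.60))  # roughly 10
```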


Figure 9.3: First implementation of a differential stage with BJTs (for MOS: replace BJT with MOS).

We can conclude that the circuit in figure 9.3 is indeed a differential stage, but also that it has a number of disadvantages that follow from its strong dependency on the common current which results from the common input voltage. The MOS equivalents of the circuits in figure 9.3 have virtually the same behaviour as the BJT versions that are discussed: these, too, have a strong dependency on the common input signal. Interestingly, the MOS versions have an output current that is proportional to the differential input voltage.

9.2.3 Second (or actual) implementation: large signal behaviour

The circuit shown in figure 9.3 suffices as an input stage, if we make sure that the (bias) currents through both transistors are independent of vCOMMON. The usual way to make a current independent of a voltage is to use a current source. This results in the following input stages (which are obviously equivalent):


Figure 9.4: Input stages that react to differential (input) signals: differential stages.

In this book, the large signal transfer function is virtually never used. A few properties of this transfer function, like the slope around vDIFF = 0 and the maximal output current, are presumed to be easily calculated.

The large signal transfer function of the differential pair of BJTs might be calculated as follows (for ideal transistors):

iOUT = iCl − iCr
iCl = IC0 · e^(q·vBEl/kT)
iCr = ITAIL − iCl
vBl = vCOMMON + vDIFF/2
vEl = vCOMMON − vDIFF/2 − (kT/q) · ln(iCr/IC0)
iCl = IC0 · e^(q·vDIFF/kT) · (iCr/IC0) = e^(q·vDIFF/kT) · (ITAIL − iCl)
iCl = ITAIL · e^(q·vDIFF/kT) / (1 + e^(q·vDIFF/kT))
iCr = ITAIL / (1 + e^(q·vDIFF/kT))
iOUT = ITAIL · (e^(q·vDIFF/kT) − 1) / (e^(q·vDIFF/kT) + 1) = ITAIL · tanh(q·vDIFF/2kT)   (9.2)

This large signal transfer function is indeed both symmetrical and independent of vCOMMON. By taking the derivative of this transfer function, the transconductance of the input stage (here around vDIFF = 0) can be calculated: gm = (q/kT)·IC, where IC = ITAIL/2 in this bias point. Of course, you might also calculate that much more easily by using a small signal equivalent circuit.
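
The result of (9.2) is easily checked numerically. The Python sketch below (with an assumed tail current; kT/q ≈ 25.85 mV at room temperature) verifies both the slope around vDIFF = 0 and the saturation at ±ITAIL:

```python
import math

# Large signal transfer of a BJT differential pair, eq. (9.2):
# iOUT = ITAIL * tanh(q*vDIFF / (2*kT)). Assumed example values:
kT_over_q = 0.02585   # V, at room temperature
I_TAIL = 1e-3         # A, assumed tail current

def i_out(v_diff):
    return I_TAIL * math.tanh(v_diff / (2 * kT_over_q))

# Slope around vDIFF = 0 equals gm = (q/kT) * ITAIL/2:
dv = 1e-6
gm_numeric = (i_out(dv) - i_out(-dv)) / (2 * dv)
gm_formula = (I_TAIL / 2) / kT_over_q
print(gm_numeric / gm_formula)   # very close to 1

# For large |vDIFF| the output saturates at +/- ITAIL:
print(i_out(0.15) / I_TAIL)      # close to +1
```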

The large signal transfer function of an input stage with MOS transistors may also be calculated, requiring a bit more effort than the bipolar variant. Roughly, and assuming that the MOS transistors are in saturation, it could follow that:

iOUT = iDl − iDr
iDl = (1/2)·K·(vGSl − VT)²
iDr = ITAIL − iDl
vGl = vCOMMON + (1/2)·vDIFF
vSl = vCOMMON − (1/2)·vDIFF − VT − √(2·iDr/K)

iDl = (1/2)·K·( vDIFF + √(2·(ITAIL − iDl)/K) )²
    = (1/2)·K·vDIFF² + K·vDIFF·√(2·(ITAIL − iDl)/K) + ITAIL − iDl

This is a slightly dodgy equation, which can be simplified into a 2nd order equation in iDl. We could solve it using (for example) the abc formula. A possible outcome is:

iDl − (1/4)·K·vDIFF² − ITAIL/2 = (K·vDIFF/2)·√(2·(ITAIL − iDl)/K) ⇔
( iDl − (1/4)·K·vDIFF² − ITAIL/2 )² = (1/2)·K·vDIFF²·(ITAIL − iDl) ⇔
iDl² − iDl·ITAIL + (1/16)·K²·vDIFF⁴ − (1/4)·K·vDIFF²·ITAIL + (1/4)·ITAIL² = 0 ⇒

iDl = ITAIL/2 ± (K·vDIFF/4)·√(4·ITAIL/K − vDIFF²)   (1 solution suffices)
iDl = ITAIL/2 + (K·vDIFF/4)·√(4·ITAIL/K − vDIFF²)

iOUT = 2·iDl − ITAIL = (K·vDIFF/2)·√(4·ITAIL/K − vDIFF²)   (9.3)

This is a true abomination, which even has a certain validity region, related to the square law... Luckily, you only need to remember a few things about it, and you only need to be able to calculate everything in the small signal case.

For the 4 circuits given in figure 9.4, the large signal transfer functions are kind of ugly, kind of complex and also not very relevant (which is nice). When using these input stages in an op amp, only two situations usually occur2:

• if the op amp is operating linearly, and the gain is large, the input signal of the op amp (which is connected to the input stage) will be very small. In that case, only the behaviour of the input stage around vDIFF = 0 is important. This is the small signal behaviour. It is usually too much work to calculate small signal

2The transition between these two situations usually only occurs for a very short time. If not, you are dealing with either a bad design or a very advanced application. In both cases, the subject is beyond the scope of this book.

aspects through a large signal derivation3.

• if the op amp is operating non-linearly, the input signal will be relatively large. Also in that case, the large signal transfer function doesn’t need to be used, because the output current for this situation will virtually always reach a certain limit. It is usually sufficient to know this limit.

Because the input stage is designed to operate on small (differential) signals, these types of circuits are usually called differential stages (or, in the case where 2 transistors are used: differential pairs). Because differential stages are structured symmetrically, their behaviour around the point of symmetry (vDIFF = 0) will be symmetrical. With small input signals, this translates into reasonably linear behaviour. With large voltage differences vDIFF, one of the transistors will conduct virtually no current, and the other almost all available current. In other words: with large input signals, the differential output current will be ±ITAIL. Because the circuit is properly constructed and supplied, the total behaviour will be a neat combination of the extreme situations described above (being vDIFF = 0 and |vDIFF| large).


Figure 9.5: Large signal transfer function of a BJT differential pair (left) and a MOS differential pair (right). For both versions, the currents through the transistors are also given (the top 2 curves); for the MOS version the region of validity from (9.3) is shown.

In figure 9.5, the (large signal) transfer functions of a BJT differential pair and of a MOS differential pair are given. Both are S-shaped curves, which have ±ITAIL as maximum and minimum. However, there are also differences between the large signal behaviour of the BJT and the MOS versions:

• With the BJT differential pair, the largest values (iOUT = ±ITAIL) are reached asymptotically, whereas in the MOS pair these values are reached when one of the two MOS transistors conducts all current.

• The large signal behaviour of a BJT differential pair is not dependent on a property of the BJT, as long as both transistors are equal. In the MOS case, the curve depends on the K factor of the transistor. A lower K factor leads to a weaker

3The only exception that comes to mind is analyzing a current mirror.

S-curve. Are you wondering how much weaker? Take a look at the validity regions of (9.3). Note that this is strongly related to the transconductance relations for a BJT and for a MOS transistor, where those of a MOS transistor are also dependent on the K factor.

As can be seen from the curves in figure 9.5, the currents through the individual transistors are scaled and shifted (by ITAIL/2) versions of the total output current. This is purely a result of the symmetry in the circuit.

9.2.4 Small signal behaviour

Differential pairs are used as input stages for op amps. For op amps that are used in linear (non-switching) applications, the input signal is usually very small, and the small signal behaviour of differential pairs is important. For two N-type differential pairs, both the real schematic and its small signal equivalent are given in figure 9.6. These will be used to derive a number of small signal properties of differential pairs.


Figure 9.6: Differential pairs (in this case: N-type) and their small signal equivalent circuits.

Note that the MOS version of the small signal equivalent circuit has a floating node: two current sources are connected to it. It may seem strange that two current sources are connected in series, but because they are controlled by voltages, this is possible (and even surprisingly simple in derivations)4.

BJT differential pair: transconductance

One of the hardest things to calculate, when it comes to differential pairs, is the transconductance. Hard, because it’s a strange derivation where it’s easy to make

4In derivations, everything that has “0” (or its alter ego “∞”) is very easy, because lots of things tend to disappear. On calculators, however, it is often a bit more difficult, because they are bad at handling “∞” or “0”.

mistakes. In the case of the small signal equivalent circuit for the BJT differential pair in figure 9.6, a derivation for vDIFF ≈ 0 might be:

gm,diffpair = idiff/vdiff
idiff = gm·vbel − gm·vber = gm·(vbel − vber)
(vbel − vber) = vdiff
gm,diffpair = gm

Would you like a different derivation? That can be arranged:

gm,diffpair = idiff/vdiff
idiff = gm·vbel − gm·vber
vbel = ibl·(α/gm)
vber = ibr·(α/gm)
ibl = −α·ibl − α·ibr − ibr ⇔ ibl = −ibr   (KCL at the emitter node)
vbel = −vber = (1/2)·vdiff
gm,diffpair = gm   (9.4)

When put into words, this states that the transconductance of a BJT differential pair is equal to the transconductance of the individual transistors in the pair. It can easily be calculated that this has to be the case in these kinds of symmetric circuits. The nice part is that icl = −icr, so these currents don’t even influence the voltage drop over the small signal resistances between base and emitter (the ones having value α/gm). Do you want yet another derivation? Here you go:

gm,diffpair = (ic1 − ic2)/(vd1 − vd2)
ic1 = gm·vbe1
vbe1 = vd1 − ve
ve = vd1 − (α/gm)·ib1
ib1 = −α·ib1 − α·ib2 − ib2   (KCL at the emitter node)
ic2 = gm·vbe2
vbe2 = vd2 − ve
ib2 = (gm/α)·(vd2 − ve)
...substitute, work out...
gm,diffpair = gm
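
The “substitute, work out” step can also be left to a few lines of Python (with assumed example values, not from the book); the symmetric KCL result ve = (vd1 + vd2)/2 then immediately gives gm,diffpair = gm:

```python
# Third derivation, numerically. Assumed example values:
gm, alpha = 40e-3, 100.0        # S, current amplification factor
vd1, vd2 = 1.7e-3, -0.4e-3      # V, arbitrary small input voltages

# KCL at the emitter node (the ideal tail source is open for small signals)
# with ib = (gm/alpha)*vbe forces the emitter voltage to the common level:
ve = (vd1 + vd2) / 2

ic1 = gm * (vd1 - ve)           # ic = gm * vbe, with vbe = vd - ve
ic2 = gm * (vd2 - ve)
gm_pair = (ic1 - ic2) / (vd1 - vd2)
print(gm_pair)                  # equals gm (up to rounding)
```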

BJT differential pair: input impedance

The input impedance of a differential pair in equilibrium, meaning with vDIFF ≈ 0V, can be easily derived in a way similar to the one discussed above:

rin = vdiff/iin
iin = vbel·(gm/α) = −vber·(gm/α)
vbel = vdiff + vber
iin = vdiff·gm/(2α)
rin = 2α/gm   (9.5)

Because icl = −icr, you only “see” the two base resistances in series at the input of the differential pair. Note that, in a derivation, it has to be shown that icl = −icr! It is good practice to calculate the input impedance for vDIFF ≠ 0 V.
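
A quick numerical check of (9.5), with assumed example values (not from the book):

```python
# Input resistance of a BJT differential pair, eq. (9.5): rin = 2*alpha/gm.
gm, alpha = 40e-3, 100.0     # S, current amplification factor (assumed values)
v_diff = 1e-3                # V, small test voltage

v_bel = v_diff / 2           # from vbel = -vber and vbel - vber = vdiff
i_in = v_bel * gm / alpha    # base current into the + input
r_in = v_diff / i_in
print(r_in, 2 * alpha / gm)  # both approximately 5000 ohm
```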

MOS differential pair: transconductance

Regarding the small signal transfer function, a MOS transistor is not very different from a BJT that has a very high current amplification factor, which can be modelled as α → ∞. To save paper, I will not repeat the calculation: the transconductance of a MOS differential pair follows from (9.4) as gm,pair = gm,transistor.

MOS differential pair: input impedance

Consider the previous derivations; from (9.5), it follows that rin → ∞. It is easy to show that idl = −idr for a MOS differential pair, because there is no other possibility5.

9.2.5 Small signal behaviour with a non-ideal current source

In §9.2.4, the (ideal) behaviour of differential pairs was analyzed. These differential pairs use an ideal current source, which is usually called a tail current source. In real circuits, a (tail) current source is made using, for example, a MOS transistor with a constant VGS (see §5.2.2), or a resistor. In the figure below, an NMOS differential pair with a non-ideal tail current source is shown, together with the matching small signal equivalent circuit. Because the small signal transfer function of a common signal will also be calculated, it shows two separate sources6.

5With a symmetrical BJT pair, there is also no other possibility, but it doesn’t directly follow from the small signal equivalent circuit. You could still calculate it. A different possibility, in this case, is to note that a small signal equivalent circuit is linear, and therefore has 1 solution. Then, you could assume that icl = −icr and show that this assumption leads to a valid (non-conflicting) result.


An NMOS differential pair with a non-ideal tail current source and the small signal equivalent circuit.

The transconductance of the differential pair is easily calculated from the small signal equivalent circuit. Here of course, the common sources supply 0V and don’t need to be included. The resulting expression is (again):

gm,diffpair = gm

Note that the transconductance of the transistors depends on the bias current of the transistors, and therefore on the total tail current. This tail current, in turn, depends on the common input signal, due to the term rtail.

The effect of rtail and the common input signal on the output current is easily calculated for vDIFF = 0 V. Purely on grounds of symmetry in the circuit and in the input signals, the drain currents of the transistors in the differential pair will be identical. The output current iout = idl − idr is therefore equal to 0. If the circuit is asymmetrical for any reason, for example due to unequal transistors in the pair or because vDIFF ≠ 0, there is also an output current component resulting from vCOMMON.

The effect of rtail on an asymmetrical differential pair is quite a lot harder to calculate, because there is no symmetry, and therefore everything has to be derived. This is a lot more work, which can be slightly reduced when we have some prior knowledge; this is shown below.


The small signal equivalent circuit for an asymmetric differential pair with a non-ideal tail current source: left side for differential calculations, right side for common calculations.

Differential behaviour

An analysis of the differential behaviour of an asymmetric differential pair, with a non-ideal tail current source, is given below. The small signal equivalent circuit, given above, is used. For this particular

6Of course, there are 4 sources of which 2 have the same value as the other 2: effectively, there are only 2. If I had drawn it differently, only 2 would be visible, or 3, or 5, or 382, or...

circuit:

vdiff = vgs1 − vgs2
(1/2)·vdiff = vgs1 + (gm1·vgs1 + gm2·vgs2)·rtail
−(1/2)·vdiff = vgs2 + (gm1·vgs1 + gm2·vgs2)·rtail

Substituting these expressions into each other yields:

vgs1 = (1/2)·vdiff · (1 + 2·gm2·rtail) / (1 + (gm1 + gm2)·rtail)
vgs2 = −(1/2)·vdiff · (1 + 2·gm1·rtail) / (1 + (gm1 + gm2)·rtail)

For the variations in the drain currents id1 and id2, it follows immediately that:

id1 = gm1·vgs1 = (1/2)·vdiff · gm1·(1 + 2·gm2·rtail) / (1 + (gm1 + gm2)·rtail)

id2 = gm2·vgs2 = −(1/2)·vdiff · gm2·(1 + 2·gm1·rtail) / (1 + (gm1 + gm2)·rtail)

If an ideal current mirror is used to make an output current iout = (id1 − id2), it follows for this output current that:

id1 − id2 = (1/2)·vdiff · (gm1 + gm2)·( 1 + 4·gm1·gm2·rtail/(gm1 + gm2) ) / ( 1 + (gm1 + gm2)·rtail )
          ≈ 2·vdiff · gm1·gm2/(gm1 + gm2)   (9.6)

The non-ideal behaviour of the current source ITAIL is not a big problem for the differential signal. For (gm1 + gm2)·rtail >> 1 — which can be easily justified7 — the differential current id1 − id2 is practically independent of rtail. When we use identical transistors8, we can state that gm1 = gm2. In that case, both drain current variations are exactly each other’s opposites, and they are truly independent of rtail.
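
The claim that rtail hardly matters for the differential transfer can be verified numerically (a sketch with assumed example values, not from the book):

```python
# Differential output current of an asymmetric pair with a finite tail
# resistance: exact (from the vgs expressions above) versus the
# rtail-independent approximation of eq. (9.6). Assumed example values:
gm1, gm2 = 1.0e-3, 1.2e-3   # S, deliberately unequal transconductances
r_tail = 100e3              # ohm
v_diff = 1e-3               # V

den = 1 + (gm1 + gm2) * r_tail
vgs1 = 0.5 * v_diff * (1 + 2 * gm2 * r_tail) / den
vgs2 = -0.5 * v_diff * (1 + 2 * gm1 * r_tail) / den

exact = gm1 * vgs1 - gm2 * vgs2              # id1 - id2
approx = 2 * v_diff * gm1 * gm2 / (gm1 + gm2)
print(exact / approx)   # very close to 1, since (gm1+gm2)*r_tail >> 1
```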

Common behaviour

In the case of an asymmetric differential pair, an output signal can exist which is caused by the common part of the input signal vcommon. This is due to the asymmetry. The derivation of the corresponding transfer function is given below, using the small signal equivalent circuit from the right side of the previous figure. From the small signal equivalent circuit, it follows that vgs1 = vgs2, and therefore

id1 = gm1·vgs1 = vcommon · gm1/(1 + (gm1 + gm2)·rtail)
id2 = gm2·vgs2 = vcommon · gm2/(1 + (gm1 + gm2)·rtail)

The output signal caused by vcommon is

id1 − id2 = vcommon · (gm1 − gm2)/(1 + (gm1 + gm2)·rtail)   (9.7)

Equation (9.7) shows that asymmetry in a differential pair results in a transfer function from common input voltage to (intended) differential output current which is nonzero9. Because the output signal of a differential pair, ideally, is only a function of the differential input signal, the output signal caused by (9.7) is unwanted.
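
How serious this unwanted transfer is, compared with the intended differential one, can be seen in a small numerical sketch (assumed example values, not from the book); the larger rtail, the better the common signal is rejected:

```python
# Common-to-differential leakage, eq. (9.7), compared with the differential
# transfer of eq. (9.6). Assumed example values:
gm1, gm2 = 1.0e-3, 1.2e-3   # S, deliberately unequal transconductances
r_tail = 100e3              # ohm

H_common = (gm1 - gm2) / (1 + (gm1 + gm2) * r_tail)  # eq. (9.7)
H_diff = 2 * gm1 * gm2 / (gm1 + gm2)                 # eq. (9.6), large r_tail

print(abs(H_diff / H_common))  # about 1200: the differential transfer dominates
```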

7If this requirement is not met, there has undoubtedly been a design error. In differential pairs, the gm,diff.pair is practically always chosen to be a lot larger than that of the tail current source, gm,tail. And because the potential gain μ = gm·ro of any reasonable transistor is way larger than 1, it follows — if no mistakes were made — that (gm1 + gm2)·rtail >> 1.

8By stating that the transistors are identical, in this case, it is meant that they have the same small signal properties. For physically identical transistors, the biasing must also be equal, also meaning VDIFF = 0.

9Often, a transfer function which has value 0 is confused with “no transfer”... Which is nonsense. You could calculate a transfer function for anything, only the value of most transfer functions would be 0. For example, the transfer function from the number of biscuits I give to my dog, to the output voltage of my amplifier, is 0 [V/dog biscuit].

9.3 From input stage to intermediate stage

In §9.2, input stages were discussed, and these input stages are virtually always differential pairs as shown in figure 9.4, or variations on them. For these kinds of differential stages, the transfer functions have been calculated in multiple ways, always assuming that the output signal is id1 − id2: a differential output current. These differential output currents are then (in an arbitrary order) amplified and converted to a voltage. Because the differential output current usually consists of 2 currents, and the output signal of the op amp is 1 voltage, somewhere there has to be a transition from differential to “single-ended”. This is the subject of this section, and it is discussed using a MOS differential pair as an example. In figure 9.7, an NMOS differential pair is displayed with several forms of conversion from differential current to single current.


Figure 9.7: An NMOS differential pair, and 4 ways to make a single-ended output current out of a differential current.

To make the various parts of the circuits recognizable, known building blocks are displayed over a grey background. Note that this is the brute force method which we are familiar with: dissecting complex problems into smaller, less complex parts, which are easier to understand.

Throwing away one half

The simplest way to make a single-ended current from a differential current is to simply throw away one of the differential current components. This has been done in the simplest possible way in figure 9.7a. For an ideal differential pair, it follows from the derivation in §9.2.4 that the output current is given (in a small signal approximation) by:

iOUT = ITAIL/2 − vdiff·gm/2   (9.8)

Note that the minus sign is used because we chose to use the transistor connected to the negative input. This yields a minus sign because of the direction in which iOUT is defined.

Throwing away one half: part 2

A possible disadvantage of the circuit in figure 9.7a is that the output signal contains a fairly large DC current component, which can lead to problems in the intermediate stages. A solution for that is given in figure 9.7c: throwing away half of the differential signal, and then compensating for the DC term by adding a DC current source. That leads to the following output signal:

iOUT = vdiff·gm/2   (9.9)

In reality, a small DC current term will still exist, due to a mismatch between the tail current source with value ITAIL, and the compensating source with (ideally) the value IX = ITAIL/2.

Subtracting currents

To use the differential output current of a differential pair, these 2 current components need to be subtracted from each other. Adding currents is easy, as is reversing a current’s direction (with the knowledge from §5.2.3). Therefore, subtracting currents from one another is also easy: first we can invert one component with a current mirror, then we can add both resulting currents. A corresponding circuit is shown in figure 9.7b. The output current is (assuming an ideal differential pair and an ideal current mirror):

iOUT = iDl − iDr
     = (ITAIL/2 + vdiff·gm/2) − (ITAIL/2 − vdiff·gm/2)
     = vdiff·gm   (9.10)

Note that in this case, for correct operation of the current mirror, the output voltage may not be higher than a certain value, because otherwise the right hand transistor in the mirror comes out of saturation. For similar reasons, the output voltage has a lower limit. This is, of course, no different from the definition of validity boundaries when assuming that transistors are always saturated. The derivation of these boundaries can be done simply by noting the required voltage drops in the circuit, as is done in figure 9.8. It follows directly from this figure that, for this circuit, vCOMMON − VT < VBIAS


Figure 9.8: An input stage with addition of the (needed) voltages to derive the range of the output voltage.

Subtracting currents: part 2

In figure 9.7b, the differential currents are directly subtracted from each other, resulting in a nice output current which, however, poses some requirements to the DC output voltage level. A possible way to diminish these requirements is to use a circuit like the one in figure 9.7d. For this circuit, the output current is given by:

iOUT = IX − iDr − iMr
iMr = IX − iDl
iOUT = iDl − iDr
     = vdiff·gm   (9.11)

Note that this yields the same output signal as when using the simple mirror as in figure 9.7b. The difference, as mentioned before, is that the output voltage range is now much larger, at the cost of some extra complexity. Are you convinced yet? Actually, if you calculate the permitted output voltage range, you will find that it might be better, but it might also be worse than that of figure 9.7b. This is because the voltage on the drains of the paired transistors doesn’t need to be “high enough” in this circuit. A solution is to (again) add extra transistors, meant to keep the voltage “high enough”:


A folded cascode circuit.

If the currents through the cascode transistors are always positive, the drain potential of these transistors will be reasonably fixed around VCAS + VGS−cascode. The permitted input voltage range and output voltage range are now easily calculated, if (as a prerequisite) both transistors are assumed to be in saturation. This results in a large voltage range, at the cost of some extra complexity. Dividing this circuit into simpler subcircuits can help to understand (and perform calculations on) the circuit.

9.4 Intermediate stages

The input stage of an amplifier is usually a differential pair, which has a current as an output signal (for example via a current mirror). The output signal of an op amp is in the voltage domain: therefore, an amplification stage, which converts the current to a voltage, is needed. This is the function of the intermediate stage, which is dealt with below. Often (but not always), an output stage is required after the intermediate stage, in order to reduce the op amp’s output impedance. Those kinds of output stages are dealt with in §9.5.


Figure 9.9: An NPN differential pair, current mirror and intermediate stage.

Therefore the most important requirements for the intermediate stage are as follows:

• a large transresistance vout/iin, to get a large overall gain factor for the op amp

• an input voltage range that allows the input stage to work properly

• a sufficiently low output resistance, which may be reached by using an additional output stage after the intermediate stage.

Like for the other stages, there are many possibilities for the intermediate stage. A few examples, increasing in complexity, will be discussed here. Starting from these basic configurations, variations on them are easier to understand and design.

Intermediate stage (not) using a resistor

The simplest implementation of a current to voltage converter is a resistor. If a resistor is used as an “intermediate stage”, the result, using an input stage with an NMOS differential pair and a PMOS current mirror, is shown in figure 9.10.


Figure 9.10: Differential stage with current mirror

The DC voltage source with value VBIAS is needed to keep M2 and M4 in saturation for small vDIFF . We can state for the output voltage of this circuit:

vOUT =(id1 − id2) · Rload

id1 − id2 = gm · vDIFF

vOUT = gm · vDIFF · Rload (9.12)

The resistance Rload can be applied explicitly, or it could consist of the (parasitic, non-ideal) finite output resistances of the MOS transistors. If only the output resistances of the transistors in the mirror are used, the small signal equivalent circuit of the circuit in figure 9.10 is shown in figure 9.11. The small signal output voltage, for a symmetrically structured circuit, is:

vout = (1/gom) · (−gm,diff.pair · vgs2 − gm,m · vgsm)

vgsm = (1/gm,m // 1/gom) · (−gm,diff.pair · vgs1)

vgs1 = −vgs2 = ½ · vdiff

vgsm = −½ · vdiff · (1/gm,m // 1/gom) · gm,diff.pair

vout = (vdiff/2) · gm,diff.pair · (2·gm,m + gom) / (gom · (gm,m + gom))

H = vout/vdiff ≈ gm,diff.pair/gom (9.13)

If the output resistances of the transistors in the differential pair are also included, there is more work to do. Note that (9.12) and (9.13) are practically identical.
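As a quick numerical sanity check on (9.13), the exact gain expression can be compared with the approximation. All component values below are illustrative assumptions, not values from the text.

```python
# Numerical sanity check of equation (9.13): for gm >> gom the exact
# expression reduces to H ≈ gm,diff.pair / gom.
# All parameter values are illustrative assumptions.
gm_dp = 1e-3   # transconductance of the differential pair transistors [S]
gm_m = 1e-3    # transconductance of the mirror transistors [S]
gom = 10e-6    # output conductance of the mirror transistors [S]

# Exact small-signal gain (full expression before the approximation):
H_exact = (gm_dp / (2 * gom)) * (2 * gm_m + gom) / (gm_m + gom)
# Approximation (9.13):
H_approx = gm_dp / gom

print(H_exact, H_approx)  # both close to 100
assert abs(H_exact - H_approx) / H_approx < 0.01
```

With gm a factor 100 larger than gom, the exact and approximate gains differ by only about 0.5%, which is why (9.13) is a good shorthand.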


Figure 9.11: Small signal equivalent circuit of the circuit in figure 9.10, MOS output resis- tances (only those in the mirror are considered).

Intermediate stage using a CEC or CSC

A somewhat more complex intermediate stage could consist of an active element, for example a transistor. In this way, a circuit could be made like in figure 9.12. In this circuit, the current flowing from the combination of differential pair and current mirror is used to directly control a PNP in a CEC configuration.


Figure 9.12: Input stage with a PNP intermediate stage.

Performing calculations on this circuit can be done in several ways:

• by making a (big) small signal equivalent circuit for the whole circuit, and performing calculations on it. This is a lot of work, because it will consist of many components.

• the brute force method: splitting the circuit into smaller subcircuits, performing calculations on those, and combining them.

• examining the circuit as-is (not recommended).

Below, the second option is taken; figure 9.12 shows the subdivision into three different subcircuits. A few relevant statements can be made, either deriving them or referring to a derivation:

• As long as it is uncertain whether the output conductances of the transistors in the differential pair are relevant, they are ignored. The same goes for asymmetry: if nothing is stated about it, it can be assumed implicitly that the pair is symmetrical. This leads to the following small signal transfer function for a differential pair:

idl − idr = gm · vdiff

• The transfer function of a symmetric current mirror, without output conductances and using MOS transistors, is 1, and for BJTs it is almost 1:

iout = iin (MOS)

iout = (α/(α + 2)) · iin (BJT)

For the current mirror using MOS transistors, when disregarding the output conductance, this leads to the following output current:

iout,vp = gm · vdiff

It is a very good approximation to model the output conductance of the transistors as a (small signal) load resistance at the output of the current mirror, see (9.12) and (9.13).

• The intermediate stage consisting of a CEC converts the input current into an output voltage. There are several equivalent ways to calculate the transfer function. The (small signal) input voltage can be calculated. This, together with the voltage gain, leads to the transfer function of the intermediate stage. Another way is to calculate the (small signal) output current, which causes the output voltage in combination with the (small signal) output resistance. Through Ohm’s Law, both approaches are identical. The last one yields:

vout = rout · α · iin

rout = μ/gm

vout = (μ · α/gm) · iin

The output resistance of the PNP is used, because it is dominant over all other resistances at the output: rIX → ∞. In absence of this output resistance, the transfer function of the intermediate stage would be ∞.

In this way, a small signal transfer function was calculated for these 3 subcircuits, which together form the total transfer function of the circuit in figure 9.12. The resulting small signal equivalent is given in figure 9.13. So the total small signal transfer function is

H = vout/vdiff = gm,diff.pair · (μPNP · αPNP)/gm,PNP
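To get a feel for the magnitudes involved, the overall gain of this two-stage combination can be evaluated numerically. All parameter values below are illustrative assumptions, not values from the text.

```python
# Illustrative evaluation of H = gm,diff.pair * (mu_PNP * alpha_PNP) / gm_PNP,
# the product of the differential pair transconductance and the transresistance
# of the PNP CEC intermediate stage. All values are assumptions.
gm_diff_pair = 1e-3   # transconductance of the input pair [S]
mu_pnp = 2000         # intrinsic voltage gain mu = gm * rout of the PNP
alpha_pnp = 100       # current gain of the PNP (this book's notation)
gm_pnp = 4e-3         # transconductance of the PNP [S]

# Transresistance of the CEC intermediate stage: vout/iin = mu * alpha / gm
r_trans = mu_pnp * alpha_pnp / gm_pnp
H = gm_diff_pair * r_trans  # overall small-signal voltage gain
print(H)  # ≈ 5e4, i.e. about 94 dB of open-loop gain
```

This shows why a single CEC intermediate stage already gives the op amp a very large low-frequency gain.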


Figure 9.13: Small signal equivalent-like schematic for the circuit in figure 9.12.

The various small signal parameters (of course) depend on the large signal bias. Obviously, there are many more interesting things to derive from this circuit, like rin, rout and the particular value of VDIFF for which the output voltage is halfway between the supply voltages. These are all useful, and it is also useful to derive these yourself.

Intermediate stage (more complex)

Op amp circuits with more complex intermediate stages are also quite easy to examine, as long as the circuit is first split into bite-size subcircuits. Figure 9.14 is an example of a slightly more complicated intermediate stage (together with a differential pair and current source). There are many ways to calculate the transfer function of this intermediate stage:

• iout,vp → vA → vB → vout: this way requires the calculation of the input resistances and transfer functions of the amplification stages. This is possible, but quite a lot of work.

• iout,vp = ib−T1 → ic−T1 = ib−T2 → ic−T2 → vout: this is probably the fastest way. The result, of course, is identical to that of all other (good) methods, but because current transfer functions are used, we don’t have to consider input resistances. In this case, that leads to less work.

• at least 10 other ways could be found...

Note that in the first case, every stage (in the intermediate stage) is regarded as a voltage amplifier, while in the second case they are regarded as current amplifiers. This choice is arbitrary; when including the relations for input and output resistances, they yield identical results. One way is just a bit more convenient than the other.


Figure 9.14: An op amp with a more complex intermediate stage.

The small signal voltage gain of this circuit is something like10:

H = vout/vin = (gm,diff.pair/2) · (1 + αmirror/(αmirror + 2)) · αT1 · αT2 · (μT2/gm,T2)

10This answer is ± correct: if a positive and negative input were defined, it would be easy to derive whether a minus sign should be included.

9.5 Output stages

For the op amp circuits discussed so far, the output impedance is quite high. For the circuits in §9.3, the output signal is a current: therefore, the output impedance is very high11, so these circuits don’t have voltage outputs for most loads. The same basically holds for the intermediate stages discussed in §9.4. For a lot of applications, the output impedance of the op amp circuits in §9.3 and §9.4 is too high, because the load impedance is relatively low12. In these cases, an extra amplification stage is needed to lower the output impedance; because this stage is the last in the amplification chain, it is usually called an output stage. The use of an output stage, which provides a low output impedance, is of course not a new concept: in §5.2 this has been dealt with extensively.

9.5.1 Requirements for the output stage

The most important requirements for an output stage are:

• the output impedance (or resistance) must be low. The exact definition of “low” depends on the application, but its resistance should at least be much lower than the output resistance of the intermediate stage.

• the input impedance of the output stage must be much higher than its output impedance; otherwise, this stage would defeat its own purpose.

• the output stage must not use a lot of power, in order to prevent burning your fingers, destroying your amplifier or being evicted due to high electricity bills.

In §9.5.2, the first two requirements are dealt with. Afterwards, in §9.5.4, some attention goes to the efficiency aspects of various types of output stages.

11An output current can, of course, be regarded as an output voltage with a large impedance. However, it is a common standard to call signals for which the impedance level is “high” “currents”, and signals for which it is “low” “voltages”. The definitions of “high” and “low” are dependent on the application. According to Ohm’s Law, what you call it is arbitrary.

12In systems with feedback, the output impedance can be a factor (1 + Aβ) lower than in the open loop configuration, but even then the output impedance is often too high to connect to a reasonable load.

9.5.2 Simple output stages

In §5.1.5, various aspects of the 3 basic configurations using BJTs have been summarized; in table 5.1, the small signal parameters of the CEC, CCC and CBC are given. The summaries for the corresponding MOS variants are also in §5.1.5. From the summaries, it is clear that especially the CCC (a.k.a. the emitter follower) and the CDC (a.k.a. the source follower) are good candidates for output stages: these circuits have a high input impedance and a low output impedance. These circuits will be examined for use as an output stage in op amp circuits.

CDC output stage

One of the simplest output stages for op amps is the source follower with tail current source; a schematic of such an op amp circuit is given in figure 9.15. Note that in order to examine the circuit, it has been split into 4 parts: a differential pair, a current mirror, an intermediate stage and the CDC output stage.


Figure 9.15: An op amp with a CDC output stage.

The small signal gain of this op amp is easy to calculate, by determining and multiplying the transfer functions of each subcircuit. Be sure to include the effects of input and output impedances when doing this! The result, for the circuit in figure 9.15, is:

H = vout/vdiff

iout,VP = gm,diff.pair · vdiff

vB = αTT · (μTT/gm,TT) · iout,VP

vout = vB

H = gm,diff.pair · αTT · μTT/gm,TT

It’s not very hard when the circuit is split up first, but a lot of work if no splitting is done.

For the calculation of the small signal output resistance, you could use the brute force method or a smarter one. An output resistance can be calculated in multiple ways, see §4.3.2. Here, the method is used which results in rout = vout/iout, in which the only remaining source is at the output, and all other independent sources have value 0. Inspection of the circuit in figure 9.15 shows that iout,VP = 0, while the output impedance left of point A is quite high: ∞ Ω. So for calculating rout, it is sufficient to use the circuit in figure 9.16.


Figure 9.16: Circuit from figure 9.15: a) with relevant aspects for calculating rout, b) matching small signal equivalent.

Note that the output resistance of the intermediate stage is necessary to get an output resistance of 1/gm: if the output stage were connected to something like a current source, you would have a much higher output impedance! You can easily verify that the output impedance of a source or emitter follower is only nice and low (1/gm) if the output impedance of the circuit connected to its input is much lower than the follower’s input impedance. Something similar holds for output stages that use different active components. Note that the required “relatively low source impedance” is easier to meet with MOS transistors than with BJTs. At least for low frequencies; capacitors have the unfortunate habit of becoming low-impedance at high frequencies. If the output stage is connected, at its input, to a relatively low-impedance source, the small signal output resistance of the op amp will be 1/gm for a CDC, or a factor 1 + 1/α lower for a CCC. This value, however, is strongly dependent on the current through the transistor. If the load and the output voltage fluctuation are large enough to significantly alter the current through the transistor, the small signal approximation is no longer valid.

For a CCC output stage with a stationary current of 1 mA (and driven from a low-impedance source), the output resistance will vary strongly, depending on the output current requested:

rout = kT/(q · IC) ≈ 25 Ω (small signal, ic ≪ IC)

rout ∈ [17 Ω, 50 Ω] for |ic| ≤ 0.5 mA

rout ∈ [12.5 Ω, ∞] for |ic| ≤ 1 mA

If a load of 1 kΩ is to be connected, that 25 Ω seems pretty low at first, but with a voltage amplitude of about 500 mV, the output impedance already fluctuates significantly. For an output signal amplitude of only 1 V, the fluctuation in output impedance is already [12.5 Ω, ∞]...
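The ranges quoted above follow directly from rout = kT/(q · iC), evaluated at the total (bias plus signal) collector current. A short sketch, assuming room temperature:

```python
# Small-signal output resistance of an emitter follower as a function of the
# total collector current: rout = kT/(q * iC). This reproduces the ranges in
# the text for a 1 mA bias current. Room temperature (kT/q = 25 mV) assumed.
kT_q = 0.025  # thermal voltage kT/q [V]
IC = 1e-3     # bias current [A]

def rout(ic_signal):
    """rout for a signal current ic_signal on top of the bias IC."""
    i_total = IC + ic_signal
    return kT_q / i_total if i_total > 0 else float('inf')

print(rout(0.0))      # ≈ 25 Ohm: the small-signal value
print(rout(0.5e-3))   # ≈ 16.7 Ohm at ic = +0.5 mA
print(rout(-0.5e-3))  # ≈ 50 Ohm at ic = -0.5 mA
print(rout(-1e-3))    # inf: the transistor is switched off
```

The asymmetry is clearly visible: rout only doubles for a large positive swing, but runs off to infinity when the signal current cancels the bias current.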

The interesting thing about this is that there is a simple relation between the output resistance and the total current through the output transistor:

iC < IC ⇔ rout > rout,ssec

iC > IC ⇔ rout < rout,ssec

From this, two methods arise to make sure the output resistance remains low for relatively large output currents:

• use a very large bias current, so that every permissible output current causes a small relative change of the current through the output transistor. It is clear that this solution mainly costs a lot of power.

• change the circuit in such a way that the current through the transistors can never (or not easily) get very small. Actually, this can only be done by letting the transistors deliver a signal current which increases the total current. If you consider that a signal can go in 2 directions, you will find that 2 transistors with complementary behaviour are needed... leading to a slightly more complicated output stage.

9.5.3 Slightly less simple output stages

As discussed in §9.5.2, simple output stages like a CDC or a CCC (source or emitter followers) show the desired behaviour for an output stage, as long as the output current is small compared to the bias current through the CDC or CCC. An efficient solution is to use 2 transistors in CDC or CCC configuration, which each provide a different part of the output current. A first implementation of this is given below:

An op amp circuit with a “symmetric” output stage.

The output resistance of this circuit is equal to the parallel value of the output resistances of the PMOS CDC stage and the NMOS CDC stage:

rout,total = (1/gm,M1) // (1/gm,M2)

The transconductances of M1 and M2 are, again, functions of the large signal current through the transistors. In the circuit given, if transistors are used which are “off” at vGS = 0^13, a current with magnitude |IX − IY| will run through one transistor, while no current runs through the other. Because the transconductance gm of a MOS transistor is strongly dependent on the current, the output impedance of this output stage will still vary strongly. This is illustrated in the next figure, where the left hand graph shows the (large signal) output current of the above circuit, as a function of the voltage between input and output, vB − vOUT, of the output stage. For this plot, regular MOS transistors were assumed (meaning they are “off” when vGS = 0). Implicitly, it has been assumed for this figure that IX = IY; it’s good practice to reason for yourself what the effect of IX ≠ IY would be on the graph.

a) Large signal output current of the above op amp circuit and b) small signal output impedance of the same circuit, as a function of vB − vOUT.

The right hand graph shows the small signal output resistance of the output stage, on a logarithmic scale.

It is easy to see that this rout varies from a low value (when a current runs through one of the transistors) to ∞ (when they are both “off”). This useless behaviour is caused by the possibility of both output transistors being simultaneously “off”.

The solution

The solution for the strongly fluctuating output resistance, as was just discussed, is to avoid the situation that both transistors are simultaneously “off”. This can be done in a way similar to that in chapter 3: using (for example) a DC voltage source between the gates of M1 and M2. The resulting schematic is given below.


The previous op amp circuit, now with a “symmetric” output stage and bias source VDC.

The value of VDC can now be used to control the overlap in which both transistors are “on” simultaneously. At VDC = 0, of course, we get the situation that was shown before in §9.5.2, while an increasing VDC decreases the range for which iOUT = 0. In the next figure, the output current and the small signal output resistance are given for three values of VDC.

• For VDC = 0, there is a large area for which iOUT = 0 and rout → ∞, which yields the same curves as those for the first op amp in §9.5.2. In the figure below, the curves marked a correspond to this situation.

• For VDC = VT,M1 + VT,M2, there is only one point for which iOUT = 0 and rout → ∞: the further the deflection at the output stage, the better (lower) the output resistance. The corresponding curves below are marked b.

• The remaining curve corresponds to VDC > VT,M1 + VT,M2; a current then always flows through at least one of the transistors. This gives a much nicer iOUT − (vOUT − vB) relation, and therefore a nicer small signal output resistance curve; see the curves marked c in the graphs below.

Large signal output current of the above circuit, as a function of vB − vOUT, for 3 values of VDC: VDC,a = 0 for a, VDC,b = |VT,M1| + |VT,M2| for b, and VDC,c > VDC,b for c.

13Most MOS transistors are enhancement transistors, which are off when vGS = 0, or if they are far into weak inversion. In any case, virtually no current runs for vGS = 0.

Implementation examples

In real circuits, the DC voltage source is of course not ideal, but made with real electronic components. Because the current through the output stage is quite dependent on the value of VDC and on properties of the transistors, and because these properties are often temperature-dependent, the DC source VDC is often realized with transistors that behave similarly to the output stage transistors. Below, a few output stages, with different realizations for VDC, are shown. The realizations in b) and c) in the figure below are straightforward implementations of the DC source using compensation for temperature and production offsets. Because the exponential behaviour of an NPN and of a PNP are virtually identical, the circuit in c) can also be implemented as in d). A nice implementation, where VDC = n · VBE with n ≥ 1 can be realized with a BJT, is shown in e). If, for simplicity, we disregard the base current through Q′, it is easily derived that VDC ≡ VCE = (1/β) · VBE, with β = R2/(R1 + R2).
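The VBE multiplier of circuit e) is easy to evaluate numerically. The resistor values and the assumed VBE below are illustrative; only the relation VDC = VBE/β with β = R2/(R1 + R2) comes from the text.

```python
# VBE multiplier of circuit e): disregarding the base current of Q', the
# resistive divider forces VDC = VCE = VBE / beta = VBE * (R1 + R2) / R2.
# VBE, R1 and R2 are illustrative assumptions.
VBE = 0.65   # base-emitter voltage of Q' [V] (assumed)
R1 = 1e3     # [Ohm]
R2 = 1e3     # [Ohm]

beta = R2 / (R1 + R2)
VDC = VBE / beta
print(VDC)  # 1.3: roughly two diode drops, enough to bias both output devices
```

With R1 = R2 the multiplier gives n = 2, i.e. VDC ≈ 2·VBE; other ratios of R1/R2 set any n ≥ 1.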


Output stages with implementations for VDC; a possible intermediate stage is included for clarity.

Calculation example for circuit c) in the above figure

Of course, nice calculations can be done on the large signal transfer function and the output resistance of all these circuits. Because these are (mostly) large signal calculations, this can be a lot of work. A few calculations are given below.

Large signal transfer function

The large signal transfer function is (as always) the easiest to calculate if there is some level of symmetry, and if non-relevant aspects are disregarded. For the derivations below, the base currents are disregarded, the transistors are assumed equal and the input signal is applied at the intersection of the two collectors of Q1 and Q2. Furthermore, an output load is assumed, because otherwise the derivations would be a bit uninformative, and because otherwise an output stage would not be necessary. The derivation given below uses an approach similar to the brute force method. This method works fine for linear and often for nonlinear calculations, and also in this case it yields a correct answer. However, the answer is, sadly, not very useful.

vOUT = iOUT · Rload

iOUT = iC−Q1 − iC−Q2

iC−Q1 = IC0 · e^((vIN + ½·VDC − vOUT)·q/kT)

iC−Q2 = IC0 · e^((−vIN + ½·VDC + vOUT)·q/kT)

iOUT = IC0 · e^(q·VDC/2kT) · (e^(q·(vIN − vOUT)/kT) − e^(−q·(vIN − vOUT)/kT))

vOUT = 2 · Rload · IC0 · e^(q·VDC/2kT) · sinh(q·(vIN − vOUT)/kT) (9.14)

Equation (9.14), then, yields an annoying equation for the output voltage, as a function of (among others) itself. This could probably be solved using Lambert-W functions, but this is not what we want. Is that all, then? No, we try again with a trick up our sleeves. In the loop consisting of the vBE values of the four transistors in the output stage, it holds that

vBE−Q1′ + vBE−Q2′ = vBE−Q1 + vBE−Q2

With mathematical rules for exponentials, a useful relation follows for the various collector currents in the output stage (if we correctly disregard a few minus signs):

iC−Q1 · iC−Q2 = iC−Q1′ · iC−Q2′ ⇔

iC−Q2 = (iC−Q1′ · iC−Q2′) / iC−Q1

Using this relation yields:

vOUT = iOUT · Rload

iOUT = iC−Q1 − iC−Q2

iOUT = iC−Q1 − (iC−Q1′ · iC−Q2′)/iC−Q1

iC−Q1 = (vOUT ± √(vOUT² + 4·Rload²·iC−Q1′·iC−Q2′)) / (2 · Rload)

vIN = vOUT + vBE−Q1 − vBE−Q1′

= vOUT + (kT/q) · ln( (vOUT + √(vOUT² + 4·Rload²·iC−Q1′·iC−Q2′)) / (2 · Rload · iC−Q1′) ) (9.15)

The equation in (9.15) is easier to use than (9.14), because vIN is given explicitly as a function of vOUT. Of course, it is still a nasty equation, because it describes the large signal behaviour of a circuit with some nonlinear transfer functions. In the figure below, the large signal transfer function of (9.15) is plotted for 3 values of iC−Q1′; the load resistance chosen is 100 Ω.


Large signal transfer function given by (9.15), for different values of iC−Q1′, where Rload = 100 Ω.

The outer curve corresponds to iC−Q1′ = 100 pA, which results in a small VDC and a reasonably large increase in vBE to deliver the required output current. Because of this, the large signal transfer function is pretty crooked for small outputs. The two other curves are for iC−Q1′ = 100 nA and iC−Q1′ = 10 μA, respectively: both currents are a lot smaller than the output current of (in this case) at most 10 mA.
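Relation (9.15) can be evaluated directly, since vIN is an explicit function of vOUT. The sketch below uses the load resistance from the text; the thermal voltage and quiescent currents are the illustrative values mentioned above (iC1p/iC2p denote the currents through Q1′/Q2′).

```python
# Evaluating the large-signal relation (9.15): vIN as an explicit function of
# vOUT for the class AB stage of circuit c). Room temperature assumed.
import math

kT_q = 0.025   # thermal voltage kT/q [V]
Rload = 100.0  # load resistance [Ohm], as in the text

def v_in(v_out, iC1p, iC2p):
    """vIN for a given vOUT, per (9.15); iC1p/iC2p are the Q1'/Q2' currents."""
    disc = math.sqrt(v_out**2 + 4 * Rload**2 * iC1p * iC2p)
    return v_out + kT_q * math.log((v_out + disc) / (2 * Rload * iC1p))

# At v_out = 0 and equal bias currents, the log term vanishes: v_in ≈ 0.
print(v_in(0.0, 10e-6, 10e-6))
# For a small quiescent current the required extra vBE is large, which is why
# the 100 pA curve in the figure is the most "crooked" one:
print(v_in(0.5, 10e-6, 10e-6))     # modestly above 0.5 V
print(v_in(0.5, 100e-12, 100e-12)) # much further above 0.5 V
```

Plotting v_in over a range of v_out values (and swapping the axes) reproduces the curves in the figure above.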

Output resistance

The (small signal) output resistance of the output stage is easily calculated by taking the derivative of (9.15); make sure to correctly incorporate the calculated load resistance.

9.5.4 Power efficiency aspects of output stages

Output stages of op amps14 drive relatively low-impedance loads. To do this correctly, relatively high output currents are needed, see for example §9.5.3, which can cause a high power consumption in the output stage. In this subsection, the power efficiency of various output stages is discussed.

Simple output stages: class A

The simplest output stages of op amps are usually emitter follower or source follower circuits, see §9.5.2. An example of such a circuit is given in figure 9.15; the (large signal) output current for these types of circuits is:

iOUT = iD − IX

If we assume that the output current is symmetrical, the highest possible harmonic output current15 is:

iOUT−max = IX · sin(ωt)

The output power and the power delivered by the supply are:

POUT = (1/T) · ∫[0,T] (IX · sin(ωt))² · Rload dt = ½ · IX² · Rload = Vout²/(2 · Rload)

Psupply = VSUPPLY · IX

The maximum value of the output voltage is determined by the supply voltage: if the output signal only just fits into the supply voltage, it holds that Vout = ½ · VSUPPLY.

14Actually, this goes for output stages of all kinds of amplifiers. If we limit ourselves to analog amplifiers, these could be audio amps, video amps, power amps for antennas, laser drivers, and a lot more.

15Usually, a nice sinusoidal output current is assumed; you could also assume a square shaped output current, of which the base harmonic is larger than the amplitude of the square wave. This is usually only used for nonsensical things, like measuring or calculating PMPO (peak momentary power output).

The maximum power efficiency of these kinds of output stages, therefore, is:

ηoutputstage ≡ POUT/Psupply = (Vout²/(2 · Rload)) · (Rload/(VSUPPLY · Vout)) = Vout/(2 · VSUPPLY) ≤ 1/4 (9.16)

From (9.16), it follows that the maximum power efficiency of these kinds of circuits is only 25%. What’s worse: this number can only be reached if the output signal perfectly fits into the supply voltage. In other words: for maximum efficiency, you need a small supply voltage, and you should try to squeeze as much signal as possible out of it. With a wider supply voltage or a smaller output signal, the efficiency easily shrinks to much lower values. This low efficiency occurs in all amplifiers where the output signal is made, as described before, with only a single transistor. These types of amplifiers are the oldest and simplest in existence, and are denoted as class A. Strictly speaking, class A stands for an amplifier where the output transistor(s) construct the entire output signal.

Slightly less simple output stages: class B

In §9.5.3, a few less simple output stages were discussed, which can deliver reasonably large output currents. This was accomplished by using two transistors, each mainly responsible for a different part of the output signal. A number of implementations of this are shown in §9.5.3. If, for simplicity, we assume that the two transistors each make up exactly half of the output sine, the output power and the supply power are again easily calculated:

POUT = (2/T) · ∫[0,T/2] (Vout · sin(ωt))²/Rload dt = Vout²/(2 · Rload)

Psupply = VSUPPLY · (1/T) · ∫[0,T/2] (Vout · sin(ωt)/Rload) dt = (VSUPPLY · Vout)/(π · Rload)

Here, as well, the maximum power efficiency is reached at the maximum voltage deflection:

ηoutputstage ≡ POUT/Psupply = (Vout²/(2 · Rload)) · (π · Rload/(VSUPPLY · Vout)) = π · Vout/(2 · VSUPPLY) ≤ π/4 (9.17)

Where the maximum efficiency of class A amplifiers is only 25%, for these amplifiers it is π/4 ≈ 78.5%. This maximum efficiency, again, only holds when the output voltage only just fits in the supply voltage. Amplifiers where either transistor is responsible for exactly 50% of the output signal are called class B amplifiers.
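The efficiency expressions (9.16) and (9.17) can be compared numerically; the supply voltage and amplitudes below are arbitrary illustrations, following the text's convention that the output amplitude is at most half the supply voltage.

```python
# Power efficiency of class A (9.16) and class B (9.17) output stages as a
# function of the output amplitude Vout. The supply voltage and the test
# amplitudes are illustrative assumptions.
import math

def eta_class_a(v_out, v_supply):
    """Class A efficiency per (9.16): Vout / (2 * VSUPPLY)."""
    return v_out / (2 * v_supply)

def eta_class_b(v_out, v_supply):
    """Class B efficiency per (9.17): pi * Vout / (2 * VSUPPLY)."""
    return math.pi * v_out / (2 * v_supply)

VSUPPLY = 10.0
print(eta_class_a(5.0, VSUPPLY))  # 0.25: the class A maximum of 25%
print(eta_class_b(5.0, VSUPPLY))  # ≈ 0.785: the class B maximum of pi/4
print(eta_class_b(1.0, VSUPPLY))  # efficiency drops fast for small signals
```

Note that both efficiencies scale linearly with the output amplitude: at a fifth of the maximum swing, even a class B stage is below 16% efficiency.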

Slightly less simple output stages: class AB

In class B output stages, the two output transistors each account for exactly 50% of the output signal. This means that each transistor does nothing at all during the other 50% of the signal: it is “off”. In §9.5.3, a number of disadvantages of these pure class B output stages are mentioned. For example, the output resistance (among other things) is bad when one transistor barely conducts while the other barely doesn’t, see §9.5.3. Seeing that the disadvantages are caused by the existence of a point where both transistors are not or only slightly “on”, the solution is simple: add an overlap in the “on” ranges of both transistors. The resulting output stage is not class A and not class B, but something in between: a class AB output stage. The maximum power efficiency of class AB, therefore, is:

1/4 ≤ ηoutputstage ≤ π/4 (9.18)

Other output stages: class D

A different type of output stage is the class D stage. Where classes A, AB and B work pretty “neatly”, a class D doesn’t seem nearly as “neat” at first glance: a class D continuously alternates between two distinct values. The total output signal of this amplifier type is the resulting quasi-DC component16 of the switched signal, where all higher frequency components are filtered off.


Figure 9.17: Principal schematic of a class D implementation.

If, for the principal schematic in figure 9.17¹⁷, the switching frequency (the frequency of the triangle wave) is a lot higher than that of the input signal, the latter can be regarded as quasi-DC. The “on” times of both switches S1 and S2 are linearly dependent on the input signal level, so the quasi-DC component of vX is as well. Then, the output filter filters all high frequencies (like the switching frequency and its harmonics) away, leaving a neat amplified version vOUT of the input signal vIN.

16This “quasi-DC” is a strange term: a signal is either DC or not. It is meant that the signal has a low frequency behaviour when compared to a different signal (here: the switching frequency), and can almost be regarded as DC in that sense.

17This is only one possible realization of a class D amplifier.

The big advantage of these amplifiers is that the switches are either “on” with a voltage drop of, ideally, 0 V (meaning 0 W dissipation), or “off” with (ideally) no dissipation either. So for an ideal class D amplifier, the maximum efficiency is 100%. Dissipation in the control circuitry and switching losses, as well as imperfect filtering, mean that this maximum efficiency is never reached, but 90% is achievable at maximum output deflection.
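The class D principle (quasi-DC component of a switched signal tracking the input) can be sketched with an ideal comparator and switches. The waveform parameters below are arbitrary illustrations, not values from the text.

```python
# Sketch of the class D principle: a slow (here: constant) input is compared
# against a fast triangle wave; the switches produce +1 or -1, and the average
# (quasi-DC component) of that switched signal tracks the input level.
# Ideal switches, no losses; all parameters are illustrative assumptions.
def triangle(t, f_sw):
    """Triangle wave between -1 and 1 with frequency f_sw."""
    x = (t * f_sw) % 1.0
    return 4 * x - 1 if x < 0.5 else 3 - 4 * x

def class_d_average(v_in, f_sw=100e3, n=100000, T=1e-3):
    """Average of the switched output vX over a window T (input held constant)."""
    total = 0.0
    for i in range(n):
        t = i * T / n
        vx = 1.0 if v_in > triangle(t, f_sw) else -1.0  # comparator + switches
        total += vx
    return total / n

print(class_d_average(0.0))  # ≈ 0: the quasi-DC component tracks the input
print(class_d_average(0.5))  # ≈ 0.5
```

In a real amplifier the averaging is done by the output filter of figure 9.17 rather than by an explicit sum, but the effect on the quasi-DC component is the same.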

Other output stages: class C, E, F, G, H, ...

There are many more types of output stages than those discussed just now; a large part of the alphabet has been reserved for them. The class C amplifier is essentially a class A or class B amplifier in which only a part of the input signal is amplified. It yields strongly deformed output signals, like for example those in figure 3.1. These amplifiers have a high efficiency (higher than class B), but inherently have lots of distortion. They are suitable for some applications. Class E and class F amplifiers are resonant amplifiers. These are tricky things, which are used as power amplifiers in transmitters, and which you can therefore find in a lot of wireless communication equipment. Class G and H are variations on the ones mentioned before, but using a variable supply voltage. As can be seen, for example, in (9.17), the efficiency depends on the ratio between output voltage amplitude and supply voltage. This supply voltage is fixed for class B, but for classes G and H it is continuously adjusted.

9.6 Frequency dependencies

Hopefully, you already know that a combination of resistance and capacitance usually leads to bandwidth limitations. More specifically, parallel capacitances lead to a low pass characteristic, which limits the bandwidth. In the simple models we have used so far, parasitic effects have been left out. When using somewhat more realistic models, more effects are included: the results become more complex, but also more accurate. Generally, every transistor has many parasitic capacitances. Therefore, there will generally be a pole at every node in the circuit. In §9.6.1, the effect of parasitic capacitances will be analyzed and “solved” for small signals. Then, in §9.6.2, the effect of limited bandwidth on large signal behaviour will be discussed.

9.6.1 Bandwidth limitations: small signal

If the parasitic components of transistors are explicitly included in an op amp circuit, you might get something like figure 9.18. We used to have 6 transistors and 3 current sources, which could be divided into subcircuits of at most 3 components. We now have a circuit with 16 capacitors, which makes the analysis a lot more complex, even when dividing the circuit into smaller subcircuits.


Figure 9.18: An op amp with explicitly displayed parasitic capacitances.

The first step to solving this problem is given in chapter 1: “the simplest model that gives enough accuracy should be used”. For the frequency behaviour of the circuit, a certain number of poles (cutoff points) is dominant, while the others are hardly relevant. It can easily be argued that:

• the cutoff points with the lowest frequencies are important, and that
• cutoff points at much higher frequencies are mostly irrelevant.

In the given circuit — and generally, when dealing with parasitic capacitances — all cutoff points are low pass cutoff points, for which:

ω−3dB,node = 1 / (rnode · cnode)

From this, it follows that (almost) all parasitic capacitances between two low-impedance points are not very relevant. With chapter 1 in mind, they can be disregarded immediately. The schematic of figure 9.18 can then quickly be cleaned up, leading to figure 9.19. In this slightly cleaner schematic, there are 3 internal nodes, each with its own filter pole. So in principle, this amplifier circuit has a third order low pass characteristic. The exact location of the three cutoff points is still unknown, and not very important yet.
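This selection step can be sketched numerically. In the snippet below, all node resistances and capacitances are made-up illustration values (not taken from figure 9.19); the point is only that the node with the lowest −3 dB frequency dominates the frequency behaviour, and nodes with much higher cutoffs can be disregarded.

```python
import math

def f_3db(r_node, c_node):
    """-3 dB (cutoff) frequency in Hz of a node with resistance r_node
    to ground and total parasitic capacitance c_node: 1/(2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_node * c_node)

# Hypothetical node impedances and capacitances, just to illustrate the idea
nodes = {
    "A": (2e3, 0.5e-12),   # low-impedance mirror node: high cutoff
    "B": (500e3, 2e-12),   # high-impedance internal node: low cutoff
    "C": (50e3, 1e-12),
}
cutoffs = {name: f_3db(r, c) for name, (r, c) in nodes.items()}
dominant = min(cutoffs, key=cutoffs.get)  # the lowest cutoff dominates
for name, f in sorted(cutoffs.items(), key=lambda kv: kv[1]):
    print(f"node {name}: f-3dB = {f/1e6:.3f} MHz")
print("dominant pole at node", dominant)
```

With these assumed values, the high-impedance node wins by several decades, which is why the other poles are "mostly irrelevant".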


Figure 9.19: An op amp with explicit parasitic capacitances: irrelevant ones are left out.

Parasites and stability

As long as the amplifier is used in an open loop, meaning without any form of feedback, the amplifier is stable. With feedback it is a different story: a third order system with a large (DC) gain can easily become unstable, see chapter 6. The solution for this was discussed thoroughly in §6.5: make the open loop gain first order dominant. This is most easily realized by making one pole dominant, by shifting that pole to much lower frequencies. For the circuit in figure 9.19, the approximate locations of the poles are:

• ωA ≈ gm,mirror / (Cce + Cds + 2·Cgs + Cgd)

• ωB ≈ gm,T1 / (α·(Cdg + Cds + Cce + Cbe + Cce·(1−A)))

• ωC ≈ gm / (μ·(Cce + Cgd + Csrc + Cgs))

So in this circuit, ωB or ωC is likely at the lowest frequency. Also, ωB is the easiest to shift to lower frequencies: using the Miller effect (see §6.5.1), an extra capacitance, parallel to Ccb of the transistor, is seen a factor (1−A) larger at node “B”. A relatively small capacitance18 can then be used for a relatively large shift of ωB.

18 If amplifiers are built on a breadboard, the size of the capacitance actually doesn’t really matter. If you want, you could just put a 100 μF capacitor between node A and the supply voltage, and the circuit will be first order dominant. If you are making the amplifier on an IC, every pF costs chip space, so it is important to make the amplifier first order dominant in the most efficient way (with the smallest possible capacitance).

Adding a Miller capacitance around a voltage amplification stage (often the intermediate stage in the amplifier) is the most popular way to make an amplifier’s transfer function first order dominant. The nice thing about this is that for a first order dominant transfer function, the other poles can often be disregarded in calculations. Then, we can replace the circuit in figure 9.19 by the one in figure 9.20.
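The leverage of the Miller effect can be sketched with a small calculation. All component values below are assumed illustration values: a capacitance CM bridging an inverting stage with voltage gain A appears at the input node as CM·(1−A), shifting that node's pole down by the same factor.

```python
import math

def miller_input_cap(c_m, gain_a):
    """Capacitance seen at the input node of an inverting voltage stage
    with gain A when c_m bridges input and output: C_eff = c_m*(1 - A)."""
    return c_m * (1 - gain_a)

# Hypothetical numbers for the intermediate stage
A = -100            # assumed stage gain (inverting)
C_M = 2e-12         # 2 pF physical Miller capacitor
r_node = 100e3      # assumed impedance level at the bridged node

C_eff = miller_input_cap(C_M, A)            # 2 pF * 101 = 202 pF effective
f_before = 1 / (2 * math.pi * r_node * C_M)
f_after = 1 / (2 * math.pi * r_node * C_eff)
print(f"effective capacitance: {C_eff/1e-12:.0f} pF")
print(f"pole shifted from {f_before/1e3:.1f} kHz to {f_after/1e3:.2f} kHz")
```

A 2 pF capacitor thus does the work of roughly 200 pF, which is exactly why this is the preferred method on an IC, where every pF costs chip area.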


Figure 9.20: An op amp with Miller capacitance for first order dominant behaviour.

Bandwidth limit of the current mirror

When an appropriate Miller capacitance is used, the total transfer function of the amplifier can be made first order dominant. Although the effect of a pole at node A will not be dominant in that case, it is still briefly discussed below. The figure below shows the small signal equivalent circuit of the input stage with its current mirror, belonging to (for example) the op amp in figure 9.20. The output current is assumed to flow into a low-impedance node, and the only capacitance here represents all capacitances on the interconnection of the gates in the current mirror.

Small signal equivalent circuit for the input stage with current mirror in figure 9.20.

For this circuit, the output current is easily calculated, for example as shown below. Because no polarity is given for the input signal or the output current, I will simply assume something. That means the answer is correct up to a ± sign.

iout = −gm,m · vgsm − idr

vgsm = −idl · ( (1/gm,m) // ZCmirror )

idl = −idr = ½ · gm · vdiff

iout = ½ · gm · vdiff · ( 1/(1 + jωCmirror/gm,m) + 1 )

From this relation, it follows that for low frequencies, the transfer function is simply iout = gm · vdiff. For frequencies much higher than the cutoff frequency at node A, the mirror will no longer mirror the current. Note that this holds only for high frequency components: the mirror will mirror low frequency signal components (including DC) just fine. For high frequencies, the transfer function is iout = ½ · gm · vdiff: a factor 2 (6 dB) lower. In the Bode plot below, the contribution of the current mirror to the total transfer function is displayed.


Bode plot of the transfer function of the current mirror.
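The 6 dB drop can be checked numerically from the derived relation. A minimal sketch with assumed values for gm,m and Cmirror; the transfer is normalized to gm·vdiff:

```python
import math

def mirror_transfer(w, g_mm, c_mirror):
    """Normalized transfer i_out/(g_m*v_diff) of the input stage with
    current mirror: 0.5*(1/(1 + j*w*C_mirror/g_mm) + 1)."""
    return 0.5 * (1 / (1 + 1j * w * c_mirror / g_mm) + 1)

g_mm, C = 1e-3, 1e-12          # assumed mirror transconductance and capacitance
w0 = g_mm / C                  # cutoff frequency at node A (rad/s)
low = abs(mirror_transfer(w0 / 1000, g_mm, C))
high = abs(mirror_transfer(w0 * 1000, g_mm, C))
print(f"|H| well below cutoff: {low:.3f} ({20*math.log10(low):.1f} dB)")
print(f"|H| well above cutoff: {high:.3f} ({20*math.log10(high):.1f} dB)")
```

Well below the cutoff the mirrored current adds to the direct current (factor 1); well above it only the direct half remains (factor ½, i.e. −6 dB), matching the Bode plot.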

Parasitic capacitance parallel to the input tail current source

A parasitic to which no attention has been paid thus far is the parallel capacitance of the tail current source of the input stage. As discussed in §9.2.5, the output resistance of the tail current source affects the rejection factor (or the CMRR) of a differential amplifier. The effect of an extra parallel capacitance is not much different: it is a parallel impedance for the current source. The result is that the rejection factor (and therefore the CMRR) of the amplifier gets worse for higher common mode signal frequencies.

9.6.2 Bandwidth limitations: large signal

Bandwidth limitations have different effects on a large and on a small signal level. For small signals, a bandwidth limitation just shows up as some kind of low pass filter, which is a nice linear effect. For large signals, however, it can also cause nonlinear effects. This is described below, using a (generally valid) example. If, for simplicity, an amplifier with dominant first order behaviour is assumed, the open loop transfer function is (approximately) given by

A(jω) = A0 / (1 + jωτ1)

If this amplifier is used in a configuration with negative feedback — as is often true for op amps in linear applications — the input signal of the op amp is ideally:

vIN,opamp = vIN / (1 + A(jω)β)    (9.19)

Equation (9.19) shows that the amplitude of the input signal of the op amp is a function of the magnitude of the output signal, as well as of A(jω)β. As can be derived, and as can be seen from the curve in figure 9.5, for an input voltage higher than a certain value an op amp will not be able to put even more effort into changing its output signal: the differential pair is “stuck”. From figure 9.20 it can be derived that for such op amp circuits:

Iout,inputstage ≤ ITAIL ⇒

|∂vCM/∂t| ≤ ITAIL/CM ⇒

|∂vOUT/∂t| ≤ ACMnode→output · ITAIL/CM

In words: the maximum change of the output voltage of the amplifier is limited by (in this case) the value of the Miller capacitance and the tail current of the input stage. Note that if there are extra amplification stages, these will also have an effect on this equation. The maximum slope of the output voltage is called the slew rate (SR), which was already encountered in §7.4.2.

Slew rate ≡ |∂vOUT/∂t|maximum

An amplifier running into these kinds of slew rate limitations is usually said to be “slewing”.

If, for example, a sinusoidal signal needs to be amplified, a limited SR can lead to a limited frequency and/or amplitude of the output signal. For an output sine vOUT = Vout · sin(ωt), the required slew rate for a neat amplification is:

SRrequired = ω · Vout

If the required SR is higher than the slew rate of the amplifier, a part of the sine wave will not be reproduced neatly, see the figure below.

The effect of a slew rate that is too small for the signal to be processed: slewing.
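The slew rate check for a sine is easily done numerically. A minimal sketch; the amplifier slew rate, signal frequency and amplitude below are all assumed values:

```python
import math

def required_slew_rate(f_hz, v_amp):
    """Required slew rate (V/s) to reproduce a sine with amplitude v_amp
    and frequency f_hz without slewing: SR_required = omega * V_out."""
    return 2 * math.pi * f_hz * v_amp

SR_opamp = 0.5e6     # assumed amplifier slew rate: 0.5 V/us
f, v = 20e3, 5.0     # assumed signal: 20 kHz sine, 5 V amplitude

sr_req = required_slew_rate(f, v)
print(f"required SR: {sr_req/1e6:.2f} V/us, amplifier SR: {SR_opamp/1e6:.2f} V/us")
print("slewing!" if sr_req > SR_opamp else "clean output")
```

With these numbers the required slew rate (about 0.63 V/µs) exceeds the assumed 0.5 V/µs of the amplifier, so the output sine would be distorted as in the figure above.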

An effect closely related to slewing and slew rate limitations is the “full-power bandwidth”. This full-power bandwidth is the maximum signal frequency at which the amplifier can still swing to its maximum output amplitude without slewing. It can easily be derived that the maximum output swing can be reached up to:

ωfp = SR / vout−max = SR / VBB

if the supply voltage is symmetric with value ±VBB.

Chapter 10

Introduction to RF electronics

10.1 Introduction

Radio frequency circuits (RF circuits) are different from normal (low-frequency) circuits. One hint in this direction could be the word radio within the term itself. With this term, we do not only mean an AM radio, FM radio or such, but any circuit or system that can transmit or receive RF signals. All electronic circuits which have some form of wireless communication are RF circuits. It does not matter whether it is AM, FM, PM, Bluetooth, IEEE802.11, GSM, 4G or something else: these are merely protocols. Transmitting and receiving data is something completely different from what we have covered so far. The laws of Ohm and Kirchhoff could (almost) always be applied within this book. However, these laws do not allow us to transmit data wirelessly: there is no wire between transmitter1 and receiver, meaning that there is no current loop or voltage mesh. Hence, it would be impossible to transmit information. Compare it with cutting the cord of your iPod earplugs: the current loop is broken, hence there is no more sound2. There are exceptions, for example a transformer or a capacitor. A transformer consists of two coupled inductors. One of these inductors is driven by an (AC) input signal and generates an (AC) magnetic field, while the other inductor transforms this field back into an (AC) output signal. The overall result is a fixed ratio between the current and voltage at the primary and secondary side of the transformer, without any wire between the primary and secondary side... A capacitor does not have a true electrical path between the two plates either: there is an insulating layer between the plates. In a capacitor the conduction takes place via charge storage at the plates, as a response to an electric field between the plates of the capacitor. The (AC) current through the capacitor occurs only if an AC voltage is applied.
Both effects, in the transformer and in the capacitor, are a lot like transmitting — transfer of electric signals without a closed conductive path — and are obviously related to actual transmitting. However, the subtle difference is that in a capacitor and in a transformer the transmitter and receiver are very closely spaced: it’s the other plate in a capacitor or the coupled inductor in a transformer. With a radio system, you are

1 Two wires, actually.
2 No sound from the earplugs of course, otherwise it would be the ultimate form of active noise control.

transmitting power, whether or not it is absorbed by any receiver at any distance. More on this later. The main difference between low-frequency electronics and radio-frequency (RF) electronics is that at low frequencies the voltage law, the current law and such hold, while at RF the propagation speed of the signals and the theory of relativity are important3. This is comparable to the fact that Newtonian (mechanical) laws apply for objects at low speeds, but do not apply at very high speeds: relativity comes into play. You wouldn’t really notice the effects of relativity in mechanics, but you will in electronics.

10.2 Transmitting and receiving

Figure 10.1 shows a basic transmitting/receiving system. At the transmitting side, the signal is amplified with a power amplifier (PA) and fed into an antenna. According to the voltage law, the current law and Ohm’s law, this should yield a zero current in the antenna, as there is no closed loop at the output of the PA that includes the antenna. As a result, the power going into the antenna should be — from a low frequency point of view — zero, and hence no power would be transmitted. After reading this chapter, you’ll know better: the antenna generates an (AC) electromagnetic wave from its (AC) input voltage and thereby converts electrical energy into electromagnetic energy. In the electrical domain the antenna then is an impedance: an electrical element that may store electrical energy but also converts electrical energy into energy in another domain.

Figure 10.1: A transmit-receive system: a signal is transmitted wirelessly

A proper transmitting antenna — which can transmit on a specific frequency and/or in a specific direction — can also receive at the same frequency from the same direction. At the receiver side, the receiving antenna4 transforms the electromagnetic wave back

3 Well, actually not the theory of relativity itself, otherwise we wouldn’t have been able to transmit signals before 1905. However, the theory does neatly describe the effects leading to RF systems.
4 The difference between transmitting and receiving antennae is that the transmitting antenna is connected to a transmitter, and the receiving antenna is connected to a receiver... In terms of signal levels: the transmitting antenna usually processes large signals and hence high power levels that it must be able to handle without e.g. evaporating. The receiving antenna typically handles only weak input signals, and then the demands on power handling capability are absent. In general, a good transmitting antenna is also a good receiving antenna, and vice versa, as long as the antenna can handle the associated power levels.

to an input voltage, from which the original vin can be obtained. The received power is related to the transmitted power as described by the Friis equation:

Preceiver = Ptransmitter · Gtransmitter · Greceiver · (λ/(4πR))²

with λ the wavelength of the EM-wave and R the distance between the transmitter and receiver antennae.

The factors Gtransmitter and Greceiver are the gain factors of the antennae in the direction of the other antenna. We will not cover this in great detail: only §10.7 covers a number of antenna characteristics such as gain and directivity. We simply assume that the antenna is not direction sensitive; then Greceiver = Gtransmitter = 1. Hence, within this book, we use:

Preceiver ≈ Ptransmitter · (λ/(4πR))²    (10.1)
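Equation (10.1) is easy to evaluate numerically. A minimal sketch; the transmitted power, frequency and distance below are assumed example values:

```python
import math

C_LIGHT = 3e8  # speed of light [m/s]

def received_power(p_tx, f_hz, distance_m, g_tx=1.0, g_rx=1.0):
    """Friis equation: P_rx = P_tx * G_tx * G_rx * (lambda/(4*pi*R))**2,
    with lambda = c/f. Gains default to 1 (non-directional antennae)."""
    lam = C_LIGHT / f_hz
    return p_tx * g_tx * g_rx * (lam / (4 * math.pi * distance_m)) ** 2

# Hypothetical link: 1 W transmitted at 1.8 GHz over 1 km
p_rx = received_power(1.0, 1.8e9, 1e3)
print(f"received power: {p_rx:.3e} W")
```

Even over only a kilometre, the received power is some ten orders of magnitude below the transmitted power, which illustrates the statement below about large distances.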

The relation above already shows that for large distances between transmitter and receiver — which is usually the case — the received power is quite a bit smaller than the transmitted power. In this chapter, the focus is mainly on getting a high transmitted power. Since the transmitting and receiving antennae are assumed to be identical, this should also give us a high(er) received power. For a high transmitted power, we need:

• a large output voltage from our PA. This has already been covered in this book. For low-frequency amplifiers — up to a few MHz — we can use op amps; for higher frequencies we need single-transistor circuits.

• to get the output voltage of the PA to the feed point of the antenna. This is fairly straightforward for low frequencies: the amplifier’s output voltage can simply be routed to the antenna. However, for RF frequencies this is a bit more complicated, as explained briefly in §10.3. No in-depth analysis is given in this book.

• the antenna to efficiently transform the input voltage at the feed point into transmitted power. After a brief introduction to antennae in §10.4, we will discuss the dipole antenna in §10.5 and the monopole antenna in §10.6. Other types of antennae are not covered in this book.

An RF system usually transmits a (modulated) sine wave. As you will see further on in this chapter, the antenna can — from an electronics point of view — be modelled as an impedance Zantenna = Rantenna + jXantenna. In this, the real part Rantenna models the conversion of electrical energy into (here) radiated electromagnetic radiation. The imaginary part jXantenna models the energy storage around the antenna, which is very much the same as energy storage in capacitors and inductors. In conventional circuit theory, energy storage in an element gives rise to reactive power. From this it follows that:

Ptransmit ≡ Pantenna,real = PRantenna
Preactive ≡ Pantenna,imag = PXantenna

Using conventional network theory, it can be derived that the transmitted power and the reactive power are given by:

Ptransmit = (V · I / 2) · cos(θv − θi)
Preactive = (V · I / 2) · sin(θv − θi)

where V and I represent the voltage and current amplitudes; the factor 2 is introduced because of the ratio between effective value and amplitude for a sinusoidal signal. It is usually easier not to work with the relations above, but with

Ptransmit = Ieff² · Re(Zantenna)

where Ieff can usually be found from an expression including the voltage applied to the feed point of the antenna and the total antenna impedance. Using Ohm’s law this yields:

I = V / Zantenna
|I| = |V| / |Zantenna|

This allows for easy calculation of the (real) transmitted power, once the effective voltage (or amplitude or ...) at the feed point of the antenna is known. For example, for a voltage amplitude V applied to the antenna:

Ptransmit = (V² / (2 · |Zantenna|²)) · Re(Zantenna)

This book pays little attention to modulation techniques. Whether we use an AM signal5, FM6, PM or a digital equivalent ASK, FSK, PSK — possibly multiplexed in time or frequency — or whatever else, it does not matter for this book.
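The power relation above translates directly into a few lines of code. A minimal sketch; the antenna impedance and drive amplitude below are assumed example values, not taken from the text:

```python
def transmitted_power(v_amp, z_antenna):
    """Real (radiated) power for a voltage amplitude v_amp applied to the
    feed point: P = V**2 / (2*|Z|**2) * Re(Z)."""
    return v_amp**2 * z_antenna.real / (2 * abs(z_antenna) ** 2)

# Hypothetical antenna impedance (complex ohms) and drive amplitude
Z = complex(73, 42.5)           # assumed Z_antenna = R + jX
P = transmitted_power(10.0, Z)  # 10 V amplitude at the feed point
print(f"transmitted power: {P:.3f} W")
```

Only the real part of Zantenna contributes to the radiated power; the imaginary part merely sets how much current the source must deliver for a given voltage.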

5 AM means Amplitude Modulation, where the amplitude of a carrier wave with frequency ωcarrier is modulated with the transmitted signal. AM signals can easily be detected with an envelope detector, which works rather well if the envelope detector works only for the modulation frequency. This often goes wrong, resulting in poor detection.
6 FM is short for Frequency Modulation; the information then is encapsulated in the frequency deviation of the signal. Creating FM is not very difficult if you use an oscillator with an electronically controllable oscillation frequency that hence creates fosc(vin). Detecting FM is more complicated; one way is reusing the transmitter’s oscillator in the feedback path around a stage that has a transfer function vout(Δf) with high gain. The system transfer then is — in terms of A and β — approximately 1/β, leading to vout ≈ fosc⁻¹(fin).

10.3 Maxwell

The laws of Maxwell relate the electric field to charge, current and the magnetic field:

rot E = −μ ∂H/∂t
rot H = J + ε ∂E/∂t    (10.2)
div E = ρ/ε
div H = 0

Here, E is the electric field, H the magnetic field, and rot and div are the well known operators from vector calculus: the rotation and the divergence7. As the names of these operations already suggest, these operators calculate how much a vector field rotates or changes. We will not go into these operations and equations: we will work towards a — within the context of this book — usable result. The magnetic field is related to currents and voltages through relativity, meaning that the constants ε and μ are related to the speed of light c:

c = 1 / √(μ0 ε0)

It follows from (10.2) that a change in the E-field causes a change in the H-field, and vice versa. From the vector operations, it also follows that the E-field caused by a time-varying H-field is perpendicular to that H-field. The same holds for an H-field caused by a time-varying E-field: this E is also perpendicular to H. It can now be derived that the power density of the E and H fields (together called an EM-field) is given by the so-called Poynting vector, which is perpendicular to the E and H fields8:

S = E × H [W/m2] (10.3)
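The relation c = 1/√(μ0ε0) is easy to verify numerically from the vacuum constants; the values below are the standard (CODATA-style) values for μ0 and ε0:

```python
import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability [H/m]
eps0 = 8.854187817e-12      # vacuum permittivity [F/m]

c = 1 / math.sqrt(mu0 * eps0)
print(f"c = {c:.6e} m/s")   # close to the defined 299 792 458 m/s
```

That two purely electrical constants reproduce the speed of light is exactly the link between circuit quantities and wave propagation used in this chapter.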

10.3.1 Maxwell and Kirchhoff

The Maxwell equations are an extension of the voltage and current laws of Kirchhoff. This can also be seen from the equations themselves: for any mesh, using the voltage law:

Σmesh ΔVmesh = 0

7 The term curl is mostly used within the US, instead of rotation. The rot-operation gives how and how much a vector field rotates in space; the div-operation takes the derivative of a vector field. In mathematical form: rot(F) ≡ ∇×F, div(F) ≡ ∇·F, and in 3 dimensions with F = [Fx Fy Fz]:

∇×F = det | ex ey ez ; ∂/∂x ∂/∂y ∂/∂z ; Fx Fy Fz |    ∇·F = ∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z

where ex, ey and ez are unit vectors in the x, y and z direction. Tricky, indeed, which is why I explain as little as possible about it in this book.
8 The “×”-operator means the vector product, giving a vector which is perpendicular to the original two vectors: something with the right-hand rule. To transmit energy with an EM-field, we need a 3-dimensional world. Therefore, a 2-dimensional world would be completely dark, since light is a special form of an EM-wave.

For the same mesh, a similar equation can be written down in terms of the electric field strength E. The summation then becomes a contour integral — an integral over a closed contour — and results in:

∮C E·dl = 0

The integral form of rot E = −μ ∂H/∂t is ∮C E·dl = −∂ΦB/∂t. Here, ΦB is the total magnetic flux through the surface that is enclosed by the contour. Setting the Maxwell relation equal to the Kirchhoff voltage law relation reveals that:

the voltage law of Kirchhoff is true if the total magnetic flux through the voltage mesh does not change in time. Hence, to get exactly the same results from Kirchhoff and Maxwell, magnetic flux is perfectly fine as long as its change is zero. As a good approximation, the voltage law applies sufficiently well if the total magnetic flux through the surface of the mesh barely changes per unit time. This can be accomplished using either (physically) small meshes or low frequencies, or both.

We can derive something similar for Kirchhoff’s current law. If we take the divergence of Ampère’s law — the Maxwell equation for rot H — then we have9, for any 3-dimensional vector:

div(rot(H)) ≡ 0 = div J + ε div(∂E/∂t) ⇔ div J = −dρ/dt

This relation states that the change in current (density) in a certain volume is due to the accumulation of charge within that volume. This accumulation happens for every current or voltage change, since charge cannot leave that volume infinitely fast. Setting the KCL equal to the Maxwell result above, it follows that:

the current law of Kirchhoff is true if the total charge within a certain volume does not change. The current law is, from a fundamental point of view, hence only applicable for DC: since any signal moves at a finite speed, any change in current or voltage will not be instantaneous, resulting in a short accumulation of charge. As an approximation, the KCL may be used if the physical dimensions of the node in question are so small that the time needed for the EM-wave to pass the node is much shorter than one period of the signal. This can be accomplished using either (physically) small nodes or low frequencies, or both.

9 Using div(rot(V)) = 0

In all previous chapters in this book, the analyses were based on Kirchhoff’s voltage and current law. This implicitly means that the signal frequencies must be low enough: low enough for the wavelength of an EM-wave, c/f, to be much larger than the physical dimensions of the circuit. For an audio amplifier, which has to operate up to 20 kHz, these assumptions are true if the amplifier is much smaller than 15 km, which is usually satisfied. For a GSM phone in the 1.8 GHz band, however, this already becomes a problem. In this case, the entire circuit must be much smaller than 15 cm to be able to use the current and voltage laws. The internal ICs within the GSM phone typically are much smaller than this 15 cm and can then be designed and analyzed using the KVL and KCL, but as soon as you connect these ICs to something at the outside — a package, matching network or antenna — the distances increase to such an extent that the laws of Kirchhoff are not applicable anymore. A basic rule? Well, alright: in general, you may use the Kirchhoff laws if the dimensions of the circuit are smaller than λ/10.
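The λ/10 rule of thumb above can be sketched in a few lines. The two example frequencies are those from the text; everything else is straightforward arithmetic:

```python
C_LIGHT = 3e8  # speed of light [m/s]

def max_circuit_size(f_hz):
    """Rule of thumb from the text: Kirchhoff's laws may be used when the
    circuit is smaller than lambda/10 = c/(10*f)."""
    return C_LIGHT / (10 * f_hz)

for name, f in [("audio, 20 kHz", 20e3), ("GSM band, 1.8 GHz", 1.8e9)]:
    size = max_circuit_size(f)
    print(f"{name}: circuit should be smaller than about {size:.3g} m")
```

At audio frequencies the limit is measured in kilometres, so it never matters; at 1.8 GHz it is a few centimetres, which is why packages, matching networks and antennae fall outside Kirchhoff territory.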

10.4 Introduction to antennae

An antenna is driven at its feed point by a voltage and transmits an electromagnetic (EM) wave; this course does not analyze the physics behind antennae in detail. From a circuit or system perspective, it is only important that the antenna does transmit or receive, and that you can model the antenna behaviour by an impedance Zantenna = Rantenna + jXantenna. Just as for any impedance, the real part of the impedance transforms the electrical energy into energy in some other domain. In an ordinary resistor, the power lost in the resistive component is transformed into heat; in an antenna, it is transmitted. The following paper gives a nice introduction to the physics behind antennas.

designfeature By Ron Schmitt, Sensor Research and Development Corp

LIKE MOST EEs, YOU PROBABLY WISH YOU HAD A BETTER UNDERSTANDING OF ELECTROMAGNETIC FIELDS AND WAVES. MAYBE THE COMPLEX MATH KEEPS YOU FROM DELVING MORE DEEPLY INTO THE SUBJECT. THIS INTUITIVE TREATMENT GOES LIGHT ON MATH. IN SO DOING, IT BRINGS LIFE TO A TOPIC THAT MANY FIND DRY AND CONFUSING.

Understanding electromagnetic fields and antenna radiation takes (almost) no math

Understanding antennas and electromagnetic fields is obviously important in RF engineering, in which capturing and propagating waves are primary objectives. An understanding of RF fields is also important for dealing with the electromagnetic-compatibility (EMC) aspects of every electronic product, including digital systems. EMC design is concerned with preventing circuits from producing inadvertent electromagnetic radiation and stray electromagnetic fields. EMC also involves preventing circuits from misbehaving as a result of ambient radio waves and fields. With digital systems’ ever-increasing frequencies and edge rates, EMC is becoming harder to achieve and is no longer a topic just for experts. The seemingly mystical processes by which circuits radiate energy are actually quite simple. To understand them, you don’t even need to know Maxwell’s equations.

Consider the following fictitious disagreement. An electrical engineer is telling a lawyer friend about a new home-electronics project. The engineer lives near some high-voltage power lines and is working on a device for harnessing the power of the 60-Hz electromagnetic field that permeates his property. The lawyer immediately states that what the engineer plans to do would, in effect, be stealing from the utility company. This statement angers the engineer, who replies, “That’s the trouble with you lawyers. You defend laws without regard to the truth. Even without my device, the stray electromagnetic energy from the power lines is radiated away and lost, so I might as well use it.” The lawyer stands his ground and says that the engineer will still be stealing.

Who is right? The lawyer is correct, even though he probably doesn’t know the difference between reactive and radiating electromagnetic fields. The field surrounding the power lines is a reactive field, meaning that it stores energy as opposed to radiating energy, so the engineer’s device would in fact be “stealing” energy from the power lines. But why? Why do some circuits produce fields that only store energy, whereas others produce fields that radiate it?

THE ENERGY GOES BACK AND FORTH

To further examine this situation, consider the circuit of Figure 1a. It is a simple circuit consisting of an ac power source driving an inductor. If the inductor is ideal, no energy is lost from the power supply.

Figure 1: An inductor creates a reactive field that stores energy (a). Adding a second inductor harnesses the reactive field to transfer energy to a load without metallic contact (b).

www.ednmag.com March 2, 2000 | edn 77

The inductor does, however, produce an electromagnetic field. Because no energy is lost, this field is purely a storage field. The circuit pumps power into the field, which then returns energy to the circuit. Because of this energy cycling, the current and voltage of the inductor are out of phase by 90°, thus producing a reactive impedance, ZL = jωL. The reactive nature of the impedance explains why such storage fields are called reactive fields.

Figure 2: A capacitor creates a reactive field that stores energy (a). Adding a second capacitor harnesses some of the reactive field to transfer energy to a load without metallic contact (b).

Referring to Figure 1b, when you place a second circuit consisting of an inductor and a resistor near the

first circuit, the field from L1 couples to L2 and causes current to flow in the resistor. (The coupled fields create a transformer.) The reactive field transfers energy from the source to the resistor even though the original circuit has not changed. This action suggests that a reactive field can store or transfer energy, depending upon what other electrical or magnetic devices are in the field. So the reactive field “reacts” with devices that are within it. Similarly, a capacitor creates a reactive field that can store energy, transfer energy, or do both (Figure 2).

Now consider the circuits of Figure 3. An ac voltage source drives two types of ideal antennas, a half-wavelength loop and a half-wavelength dipole. Unlike the previous circuits, the antennas launch propagating fields that continuously carry energy away from the source. The energy is not stored but propagates from the source regardless of whether there is a receiving antenna. This energy loss appears as resistance to the source in a similar manner to how loss in a resistor corresponds to heat loss.

Figure 3: The two most basic antennas are a loop antenna whose circumference is equal to the source wavelength divided by 2 (a) and a dipole antenna whose length is equal to the source wavelength divided by 2 (b).

Now back to the engineer and the lawyer. The engineer thought that the power-line field near his house was radiating energy the way an antenna does and that he was just collecting the radiating energy with a receiving antenna. However, when the engineer measured the field on his property, he measured the reactive field surrounding the power lines. When he activates his invention, he is coupling to the reactive field and removing energy that is stored in the field surrounding the power lines—energy that would otherwise be cycled to the loads. The circuit is analogous to the transformer circuit in Figure 1b, so the engineer is, in fact, stealing the power.

These examples illuminate the characteristics of reactive and radiating electromagnetic fields, but they still do not answer the question of why or how radiation occurs. To understand radiation, it is best to start with the analysis of the field of a point charge.

For a single charged particle, such as an electron, the electric field forms a simple radial pattern (Figure 4). By convention, the field lines point outward for a positive (+) charge and inward for a negative (−) charge. The field remains the same over time; hence, it is called a static field. The field stores the particle’s electromagnetic energy. When another charge is present, the field imparts a force on the other object, and energy is transferred. When no other charged particles are present, the field has no effect but to store energy. The fact that energy is transferred from the field only when another charged particle is present is a defining characteristic of the static field. As you will soon learn, this fact does not hold true for a radiating field.

TABLE 1—SUMMARY OF FIELD CHARACTERISTICS
Property         | Near (reactive) field                                                  | Far (radiated) field
Carrier of force | Virtual photon                                                         | Photon
Energy           | Stores energy; can transfer energy via inductive or capacitive coupling | Propagates (radiates) energy
Longevity        | Extinguishes when source power is turned off                           | Propagates until absorbed
Interaction      | Act of measuring field or receiving power from field causes changes in voltages/currents in source circuit | Act of measuring field or receiving power from field has no effect on source
Shape of field   | Depends completely on source circuit                                   | Spherical waves; at far distances, field takes shape of plane waves
Wave impedance   | Depends on source circuit and medium                                   | Depends solely on propagation medium

Now consider the same charged particle moving at a constant velocity, much lower than the speed of light. The particle carries the field wherever it goes, and, at any instant, the field appears the same; the field lines extend outward at the speed of light. For example, light takes about eight minutes to travel from the sun to earth. If the sun were to suddenly extinguish, people on earth would not know until eight minutes later. Similarly, as a particle moves, the surrounding field continually updates to its new position, but this information can propagate only at the speed of light. Points in the space surrounding the particle actually experience the field corresponding to where the particle used to be. This delay is known as time retardation.

It seems reasonable to assume that even a charge moving at constant velocity should cause the field lines to bend because of time retardation. However, nature (that is, the electromagnetic field) gets around the delay by predicting where the particle will be based on its past velocity. Therefore, field lines of particles moving at constant velocities do not bend. This behavior stems from Einstein’s theory of special relativity, which states that velocity is a relative—not an absolute—measurement. Furthermore, the bent field lines of the charge correspond to radiating energy. Therefore, if the field lines are straight in one observer’s reference frame, conservation of energy requires that all other observers perceive them as straight. To keep things easier, the rest of this article ignores the magnetic field.

Figure 4: You can show the electric field of a static charge (a) or a dipole (b) as a vector plot, a streamline plot, and a log-magnitude contour plot.

A CURIOUS KINK

When a charged particle accelerates, the lines of the electric field start to bend. To understand why the bent field lines of a charge correspond to radiated energy, consider a charged particle that starts at rest and is “kicked” into motion by an impulsive force.
When the as in the static case (Figure 5a). In addi- (Figure 5b). A review of Einstein’s theo- particle accelerates, a kink appears in the tion, because the charge is now moving, ry of relativity helps to explain why the field immediately surrounding the parti- a magnetic field also surrounds the bending occurs: No particle, energy, or cle. This kink propagates away from the charge in a cylindrical manner, as gov- information can travel faster than the charge, updating the rest of the field that erned by Lorentz’s law. This magnetic speed of light, c. This speed limit holds has lagged behind (Figure 5c). Part of the field is a consequence of the fact that a for fields as well as particles. For that energy exerted by the driving force is ex- moving electric field produces a mag- matter, a field is just a group of virtual pended to propagate the kink in the field. netic field and vice versa. As with a stat- particles (see sidebar “Quantum physics Therefore, the kink carries with it ener- ic charge, both the electric and magnet- and virtual photons”). For instance, if a gy that is electromagnetic radiation. ic fields of a constant-velocity charge charged particle were suddenly created, Fourier analysis shows that because the store energy and transmit electric and its field would not instantly appear every- kink is a transient, it consists of a super- magnetic forces only when other charges where. The field would first appear im- position of many frequencies. Therefore, are present. To make the description eas- mediately around the particle and then a charge accelerating in this manner si-

80 edn | March 2, 2000 www.ednmag.com 244 CHAPTER 10. INTRODUCTION TO RF ELECTRONICS

designfeature Electromagnetic fields

multaneously radiates en- celeration, but also to quantum-energy- Figure 5 ergy at many frequencies. state (orbital) changes of electrons You can also analyze bound into atoms. this phenomenon from a kinetic-energy perspec- THE FIELD OF AN OSCILLATING CHARGE tive. In freshman physics, A charge moving in a circle experi- you learned that it takes a ences a sinusoidal acceleration. In fact, si- (a) force to accelerate a parti- nusoidal acceleration occurs for a charge cle and that the force moving in any oscillatory manner. In this transfers energy to the case, the “kinks” in the field are continu- particle, thus increasing its ously varying and sinusoidal, and the kinetic energy. The same electromagnetic radiation occurs only at analysis holds true for the the frequency of oscillation. An oscillat- particle’s field. Energy is ing charge produces rippling waves that (b) required to accelerate the propagate outward, in some ways similar field. This energy propa- to the waves produced when you toss a gates outward as a wave, pebble into a pond (Figure 6b). increasing the field’s ki- If you connect a constant voltage netic energy (Figure 6a). across a length of wire, the voltage caus- All electromagnetic ra- es a proportional current governed by diation—be it RF, ther- Ohm’s law (IϭV/R). The dc current mal, or optical—is created traveling in a wire consists of migrating (c) by changing the energy of electrons. Although the path of each in- electrons or other charged dividual electron is random and com- The electric field follows a particle moving to the right with con- particles. This general plex, the average movement of the elec- stant velocity (a); the electric field follows a particle moving to statement applies not trons, considered as a group, causes a the right with constant acceleration (b); the electric field follows only to free-electron-en- constant drift of charge. Therefore, at a a particle coming into motion from a resting condition (c). 
Parti- ergy changes that result macroscopic level, you can ignore the cle locations and field lines at earlier times appear in gray. from acceleration and de- specifics of each electron and model the current as a fictitious charge traveling at a constant ve- Figure 6 locity. Radiation does not occur because the effective charge travels at a constant velocity and experiences no acceleration. (Collisions at the atomic level cause ran- domness in the electron movement. This random component of motion pro- duces thermal radiation and electrical noise, which are not germane to this discus- sion.) If the voltage across a wire slowly oscillates in time at

frequency fo, the accompa- nying electric field takes the same form as that of the dc

These log-magnitude plots show the electric field of accelerated charges. A charge starts at rest and is accelerated by a short impulsive force (a). A charge starts at rest and is sinusoidally accelerated along the horizontal (a) (b) axis (b).


charge, except that the magnitude varies between positive and negative values (Figure 8).

[Figure 7 caption] In this depiction of the electric field surrounding a wire carrying a dc current, shades of gray denote the relative voltage levels inside the wire. Magenta arrows denote the current.

RADIATION FROM OSCILLATING CHARGES

Relating frequency to wavelength by λ = c/f, you can define a slow oscillation as any frequency whose corresponding wavelength is much greater than the length of the wire. This condition is often called quasistatic. In this case, the current in the wire varies sinusoidally, and the effective charge experiences a sinusoidal acceleration. Consequently, the oscillating charge radiates electromagnetic energy at frequency f0. The power (energy per time) radiated is proportional to the magnitude of current and the length of the wire because both parameters increase the amount of moving charge. The radiation power is also proportional to the frequency because the charge experiences a greater acceleration at higher frequencies. (Imagine yourself on a spinning ride at an amusement park. The faster it spins, the greater the acceleration you and your lunch feel.) Expressed algebraically, radiated power ~ current × length × frequency.

This expression clearly shows why RF signals radiate more readily than do lower frequency signals, such as those in the audio range. In other words, a given circuit radiates more at higher frequencies. Because wavelength is inversely proportional to frequency (λ = c/f), an equivalent expression is: radiated power ~ current × length / wavelength.

Hence, at a given source voltage and frequency, the radiated power is proportional to the length of the wire. In other words, the longer you make an antenna, the more it radiates.

Until now, the discussion has dealt only with slowly oscillating fields. When you increase the frequency of the voltage source so that the wavelength is approximately equal to or less than the length of the wire, the quasistatic picture no longer holds true. The current is no longer equal throughout the length of wire (Figure 9). In fact, the current points in different directions at different locations. These opposing currents cause destructive interference just as water waves colliding from opposite directions tend to cancel each other out. The result is that the radiation is no longer directly proportional to the wire or antenna length.

Figure 10 shows a plot of radiated power as a function of antenna length. When the antenna is smaller than a wavelength, the radiated power is roughly proportional to the length. However, for wire lengths near or above a wavelength, the radiated power relates as a slowly increasing and oscillating function. So, why is a length of λ/2 usually chosen for dipole antennas (λ/4 for a monopole)? The "diminishing returns" of the radiated power versus wire length partially explain why dipole antennas' length is usually chosen to be less than a wavelength. The length of λ/2 is chosen because at this wavelength, the antenna is electrically resonant, which makes its electrical impedance purely real, and the radiation pattern is simple (single-lobed) and broad.

NEAR AND FAR FIELD

As mentioned earlier, an ac circuit has a reactive field and a radiating field. The reactive field of an ac source circuit or system is often called the near field because it is concentrated near the source. Similarly, the radiating field is referred to as the far field because its effects extend far from the source. Here's why. You can represent the power density of an electromagnetic field at a distance, r, from the source by a series in 1/r:

field power density = PD = C1/r² + C2/r³ + C3/r⁴ + ...

Now, imagine a sphere with radius, r, centered at the source. You can calculate the total power passing through the surface of the sphere by multiplying the power density by the sphere's surface area:

total power leaving sphere = P = (4πr²)·PD = 4π·(C1 + C2/r + C3/r² + ...)

When you examine this formula, you can see that the first term is purely a constant. For this term, no matter what size you make the sphere, the same amount of power flows through it. This result is just a mathematical way of showing that power flows away from the source. Therefore, the first term is due solely to the radiated field. Also, as r gets large, all the other terms become negligible, leaving only the radiated term. Conversely, at close distances (small values of r), the nonconstant terms become much larger, and the constant radiating term becomes negligible. These nonconstant terms taken together represent the power in the reactive field.

The boundary between the near and far fields is generally considered to fall at about λ/(2π). Furthermore, the reactive field typically becomes negligible at distances of 3 to 10λ. It is interesting to compute the boundary at different frequencies. At 60 Hz, the boundary is 833 km. Therefore, almost all cases of 60-Hz interference occur in the near (reactive) field. At 100 MHz, the boundary is 0.5 m, making this frequency useful for radio communication. At 5×10^14 Hz (optical waves), the boundary is 0.1 µm, explaining why optical sources such as light bulbs always appear as radiating sources and never as reactive sources.

The near and far fields have other characteristics. The shape of the near field is closely related to the structure of the source, whereas the far field becomes independent of the source, taking the form of spherical waves. At large distances, the far field takes the form of traveling plane waves. The requirement for the plane-wave approximation is r > 2(ds + dr)²/λ, where ds is the size of the source antenna, dr is the size of the receiving antenna, and r is the distance between the antennas. The wave impedance (ratio of electric- to magnetic-field magnitude) of the near field is also a function of the source circuit, whereas in the far field, the wave impedance, η, depends only on the medium (η = 377 Ω in free space). Figure 11 graphs the wave impedance as a function of distance. Table 1 summarizes the field characteristics.

SIDEBAR: QUANTUM PHYSICS AND VIRTUAL PHOTONS

Quantum physics was born just 100 years ago. In 1900, Max Planck presented his theory on the quantization of energy levels of thermal radiation. Five years later, Albert Einstein further expanded quantum physics when he postulated that all energy is quantized. At the core of his theory was the notion that light and electromagnetic radiation in general are quantized into particles called photons. This concept points out the bizarre wave-particle duality of light. In some ways, electromagnetic radiation acts as a distributed wave of energy. In other respects, radiation acts as a localized particle.

At the crux of quantum physics is the idea that all electromagnetic energy is transferred in integer quantities of a fundamental unit, Planck's constant. Mathematically, you can state this concept as E = hν, where E is energy, h is Planck's constant, and ν is the frequency of the photon. Another principle of quantum physics is the Heisenberg uncertainty principle. To Einstein's dismay, it was his quantum theory that led directly to the uncertainty principle, which states that all measurements have inherent uncertainty. Einstein expressed his dislike of this uncertainty when he said, "God does not play dice," but he could never disprove the existence of uncertainty. Specifically, the uncertainty principle states that you can never know both the exact position and momentum of any particle. Mathematically, the bounds on the errors in determining position (Δx) and momentum (Δp) are related as follows: Δp·Δx ≥ h/(4π).

The uncertainty principle is not a limit set by the accuracy of measuring equipment. It is a fundamental property of nature. The concept is straightforward: To measure a particle, you must interact with it. Think about looking at small objects through a microscope. To see what you're looking at, you shine a light source (or electron beam) on the object. Although the light beam may not have much consequence when you measure large objects, such as a baseball, it drastically changes the position, momentum, or both of tiny objects, such as electrons. The only way to get around this problem would be to reduce the power of the light source to an infinitesimal level. But Einstein's quantum theory limits how small the energy can be. To observe any object, you must transfer an amount of energy, E ≥ hν, which alters the state of the object you are observing. Hence, the very act of measuring or interacting with a particle changes its position and/or momentum. Thus, you are left with bizarre consequences that even Einstein didn't foresee.

From another point of view, the uncertainty principle states that particles of small enough energy and short enough life spans can exist, but you can never measure them. This idea is stated as: ΔE·Δt ≤ h/(4π). This expression allows for "virtual particles" to spontaneously appear and disappear as long as they obey both the uncertainty principle and the law of conservation of energy. You can never directly observe or measure these ephemeral particles, hence the term "virtual particles."

Now back to electromagnetic fields. The stored energy in an electromagnetic field allows the creation of these virtual particles. These particles carry the electromagnetic force in reactive or nonradiating fields. The particles have all of the properties of the real photons that make up radiating fields except that they are fleeting in time and can never exist unless their source is present. An electron in free space is a good example. It is surrounded by a static electric field that stores energy. When no other charged particles are present, the virtual photons that constitute the field appear and disappear unnoticed without transferring energy. Now, if you place a second charge near the electron, the electron's virtual photons transmit a force to the charge. In a reciprocal manner, the virtual photons from the field of the charged particle transmit a force to the electron. This strange behavior is how electromagnetic force operates at the quantum level.

[Figure 8 caption] The electric field surrounding a wire carrying a slowly varying ac current. Magenta arrows denote the current, and shades of gray denote the relative voltage levels inside the wire at time t = 0 (a) and at time t = T/2, a half-cycle later (b).

[Figure 9 caption] The electric field surrounding a wire carrying a rapidly varying ac current. Magenta arrows denote the current, and shades of gray denote the relative voltage levels inside the wire at time t = 0 (a) and at time t = T/2, a half-cycle later (b).


Stationary charges and charges moving with constant velocity produce reactive fields; accelerating charges produce radiating fields in addition to the reactive field. DC sources cause a constant drift of charges and hence produce reactive fields. AC sources cause the acceleration of charges and produce both reactive and radiating fields. Radiating fields carry energy away from the source regardless of whether there is a receiving circuit or antenna. In the absence of another circuit, reactive fields store energy capacitively, inductively, or both ways. In the presence of another circuit, reactive fields can transfer energy through inductive or capacitive coupling. In general, radiation increases with frequency and antenna length. Similarly, radiation and transmission-line effects are usually negligible when wires are much shorter than a wavelength. The reactive field's characteristics depend greatly on the source circuit. The radiating field's characteristics, such as wave impedance, are independent of the source.

[Figure 10 caption] Radiated power emitted from a dipole antenna as a function of the antenna length. The source current is 1 A.

[Figure 11 caption] Compare the wave impedance as a function of distance from a loop antenna (as in Figure 3a) with that of a dipole (as in Figure 3b). In the near field, the loop antenna's energy is mostly magnetic; at close range, the dipole antenna's energy is mostly electric. In the far field, the division between electric and magnetic energy is the same for both antenna types.


10.5 Dipole antennae

A dipole antenna is the most basic antenna; the construction of a general dipole antenna is shown in figure 10.2. The exact mathematical analysis is quite complex and utterly useless in the context of this book: we only want to transmit, so we only need the electrical properties of antennas. This section therefore only presents some formulas (yes, formulas: just substitute numbers and you will get a number out of them) with which you can calculate some electrical properties.


Figure 10.2: The layout of a dipole antenna: the currents in both branches of the dipole run in the same direction, causing an additive field

The antenna impedance as seen from a driving circuit can be calculated... but that is quite complex. The end result of the calculation is shown in figure 10.3: this figure shows the impedance at the antenna connector as a function of l/λ, where we have separated the real and the reactive part. Note that the antenna — depending on frequency (thus wavelength) and antenna length — can vary between a low and a high resistance, with a capacitive or an inductive character.

Figure 10.3: Antenna impedance as seen on the connector of the antenna, as a function of l/λ, for an antenna radius a = 10⁻⁵·λ. R_antenna is the real part and X_antenna the reactive part; the vertical axis spans −1000 Ω to 1000 Ω, with l/λ running from 0 to 2.

Calculating this can be done using the following set of equations, in case you'd be interested or if you want to get a numerical value. The impedance at the antenna connector is related to the "internal impedances" as follows:

Z_antenna = (R_rad + j·X_a) / sin²(πl/λ)

R_rad = P_rad / I²_antinode = (η/2π) · ∫₀^π [cos((πl/λ)·cosθ) − cos(πl/λ)]² / sinθ dθ

X_a = (η/4π) · { 2·Si(2πl/λ) + cos(2πl/λ)·[2·Si(2πl/λ) − Si(4πl/λ)]
                − sin(2πl/λ)·[2·Ci(2πl/λ) − Ci(4πl/λ) − Ci(4πa²/(lλ))] }

with

Si(x) = ∫₀^x sin(t)/t dt        Ci(x) = −∫_x^∞ cos(t)/t dt

In vacuum or air, the radiation resistance of a half-wave dipole antenna equals 73.14 Ω. If the antenna shows no other significant resistive losses, due to e.g. Ohmic losses in the antenna, then the radiation resistance is equal to the resistance of the antenna as a whole. The impedance of an antenna consists of a resistance and a reactance (inductive or capacitive). For a half-wave dipole the antenna thickness turns out not to matter, and its reactance in vacuum or air is j42.55 Ω. For other antenna lengths the relation most certainly does depend on the thickness of the antenna.

As you can see from the equations, the reactance of a dipole is positive for certain values of l/λ and negative for other values. A negative reactance might seem scary, but it isn't. As you may know, the impedances of reactive elements are

Z_C = 1/(jωC) = −j/(ωC)   ⇔   X_C = −1/(ωC)
Z_L = jωL                 ⇔   X_L = ωL

so a positive X corresponds to an inductance and a negative X to a capacitance. Nothing unusual here.
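For the half-wave case (l = λ/2) the formulas are easy to evaluate numerically: cos(πl/λ) = 0, sin(πl/λ) = 1, and the thickness term drops out because it is multiplied by sin(2πl/λ) = 0, so the reactance collapses to (η/4π)·Si(2π). The sketch below is my own check, not from the book; it uses plain trapezoid integration and takes η = 120π ≈ 377 Ω.

```python
import math

ETA = 120 * math.pi  # free-space wave impedance, ~377 ohm

def trapz(f, a, b, n=20000):
    """Plain trapezoid rule."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def Si(x):
    """Sine integral Si(x) = int_0^x sin(t)/t dt (integrand -> 1 at t = 0)."""
    return trapz(lambda t: math.sin(t) / t if t > 0.0 else 1.0, 0.0, x)

def r_rad_integrand(theta):
    s = math.sin(theta)
    if s < 1e-12:  # integrand tends smoothly to 0 at both endpoints
        return 0.0
    return math.cos(0.5 * math.pi * math.cos(theta)) ** 2 / s

# half-wave dipole: Z_antenna = R_rad + j*X_a directly
R_rad = ETA / (2 * math.pi) * trapz(r_rad_integrand, 0.0, math.pi)
X_a = ETA / (4 * math.pi) * Si(2 * math.pi)  # the remaining Si/Ci terms cancel

print(round(R_rad, 2), round(X_a, 2))  # close to 73.13 and 42.54
```

The numbers land within a hundredth of an ohm of the 73.14 Ω and j42.55 Ω quoted above, which is a nice sanity check on the formulas.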

10.6 Monopole antennae

A monopole is nothing other than half of a dipole antenna. A dipole antenna is symmetrical: it is driven at the center and both halves do exactly the same thing concerning radiation, impedance and some other stuff. In a monopole, this symmetry is removed, simply by throwing away half of the dipole¹⁰. That is basically it. Hence, the differences between a dipole and a monopole are small:

• the antenna radiates only half its power: the Rrad of a monopole is only half that of a dipole.

• in the case of a dipole, you drive both sides of the antenna, and electrically speaking that means driving 2 impedances in series. From this it may be clear that the impedance of a monopole is half that of the dipole.


Figure 10.4: A monopole is one of the two symmetrical halves of a dipole, assuming that there is a groundplate in the exact center of the two original dipole halves.

The equivalent of, for instance, a half wave dipole, is now a quarter wave monopole, with:

                half-wave dipole    quarter-wave monopole
length l        λ/2                 λ/4
R_rad           73.14 Ω             36.57 Ω
R_antenna       73.14 Ω             36.57 Ω
X_antenna       j42.5 Ω             j21.25 Ω

Table 10.1: A few antenna characteristics

¹⁰ Indeed, this is bad for the environment; you should have built a monopole in the first place.

10.7 Other antenna characteristics

We can keep on talking about antennas just about forever, but for this introductory course, that would not be very useful. However, it is useful to get acquainted with a few concepts concerning the antenna: the most important ones are discussed below.

Directivity and gain

The terms directivity and gain of an antenna are often mixed up. The directivity of an antenna gives a measure of how well the antenna is capable of bundling its radiation into a specific direction. It does not matter whether it is the transmit or the receive antenna, since antenna = antenna.

• a so-called isotropic antenna transmits or receives equally well in every direction. This is an ideal antenna, which will never be realised. An isotropic antenna is usually used as the reference to which directivity is related.

• an omni-directional antenna is another imaginary antenna, which transmits and receives equally well in every direction within a certain plane.

• a dipole and a monopole transmit and receive all around the antenna: they look a bit like an omni-directional antenna.

Numerically, the directivity or gain of an antenna gives the ratio of the power transmitted in one specific direction to the power transmitted by an isotropic antenna:

G = P_max,antenna / P_isotropic antenna

where the value of G is usually given in dBi: the gain or directivity compared to an isotropic antenna. The gain or directivity of a dipole antenna is 2.15 dBi. Antennae with a high gain must also be aimed correctly; antennae which need no aiming have, by definition, a low gain.
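The dBi figure is just the ratio G above expressed on a decibel scale, G_dBi = 10·log₁₀(G). A minimal sketch (the function names are mine) to go back and forth:

```python
import math

def ratio_to_dbi(g):
    """Gain as a power ratio relative to an isotropic antenna -> dBi."""
    return 10.0 * math.log10(g)

def dbi_to_ratio(g_dbi):
    """dBi -> linear power ratio."""
    return 10.0 ** (g_dbi / 10.0)

# the 2.15 dBi of a dipole corresponds to about 1.64x the power density
# of an isotropic antenna in the strongest direction
print(round(dbi_to_ratio(2.15), 2))  # 1.64
print(ratio_to_dbi(1.0))             # 0.0: the isotropic reference itself
```

So "2.15 dBi" does not mean the dipole creates extra power; it only concentrates the same total power somewhat, about a factor 1.64, in its best direction.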

10.8 A transmission system, a bit more exact

Figure 10.1 shows a rather simple representation of a transmit and receive system. Below, we have a more complete representation, where the signal to be transmitted is explicitly applied to a block which takes care of the modulation. For transmitting e.g. audio in the FM band, this block modulates the signal onto a carrier wave of about 100 MHz using FM modulation.


Figure 10.5: A transmit and receive system, a bit more exact

The "match" block between the power amplifier and the antenna ensures that the total load impedance seen from the PA is optimal. The results from §10.5 show that the antenna impedance can be heavily reactive (inductive or capacitive), which makes it a rubbish load. Using a so-called matching network to compensate this reactive part significantly improves the output power.

The receiver block looks quite a bit like the transmit part. Firstly, the antenna signal is amplified using something called a Low-Noise Amplifier (LNA), which amplifies a lot without adding a lot of noise. Demodulation can be done in a number of ways. In the figure, the inverse of the modulation operation is used to retrieve the original signal vin. This, too, can be done in a number of ways; the simplest is using a feedback system with transfer H = A/(1 + Aβ) ≈ 1/β, as shown in figure 10.5. The only difference with the systems from chapters 6 and 7 is that the op-amp block for FM demodulation has its input signal in the frequency domain, with the output signal in the voltage domain. The "opamp" input circuit then compares frequencies, and with the transmitter's modulator in the feedback loop it effectively demodulates the FM-modulated input signal. The hard part is usually getting sufficient gain and sufficiently low noise at the high frequencies used. Diving into receivers is not a subject of this book.
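The feedback relation H = A/(1 + Aβ) ≈ 1/β is easy to check numerically. A throwaway sketch (names are mine): for any sufficiently large loop gain, the closed-loop transfer is set almost entirely by the feedback block β, which is exactly why putting the modulator in the feedback path turns the loop into a demodulator.

```python
def closed_loop(A, beta):
    """Closed-loop transfer of a feedback system: H = A / (1 + A*beta)."""
    return A / (1.0 + A * beta)

beta = 0.1
for A in (1e2, 1e4, 1e6):
    # converges toward 1/beta = 10 as the forward gain A grows
    print(A, closed_loop(A, beta))
```

With A = 10⁶ the result is within 0.001% of 1/β; the accuracy of the demodulation is then set by the modulator in the feedback path, not by the messy high-gain forward block.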

10.9 In addition

Things directly related to transmitters are obviously the maximization of the output power and the (shape of the) spectrum of the generated wave which you feed into your antenna. These two issues might make you think about impedance matching and about Fourier transforms. Good. You should just know that you do not want to blindly apply impedance matching and that Fourier transforms are frequently carried out the wrong way. That is why this section may be useful.

10.9.1 Impedance matching and maximum power transfer

We can easily calculate the power ending up in a load impedance Rload, for a source with a certain source impedance Rsource. The maximum power as a function of Rload can be obtained by differentiation:

P_load = I²_load · R_load = ( V_source / (R_load + R_source) )² · R_load

∂P_load/∂R_load = V²_source · (R_source − R_load) / (R_load + R_source)³

We directly see that the power in the load is at its maximum if the load resistance is equal to the source resistance: R_load = R_source. In a similar fashion, we find that the power transfer with complex impedances is highest for Z_load = Z*_source. A fairly simple result. But also a result which you should not use for a power amplifier...
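A brute-force check of this optimum, as a quick sketch with invented numbers (a 10 V source with R_source = 50 Ω, neither value from the text):

```python
def p_load(v_source, r_source, r_load):
    """Power dissipated in r_load for a source with internal resistance."""
    i = v_source / (r_source + r_load)
    return i * i * r_load

V, R_SOURCE = 10.0, 50.0                        # invented example values
candidates = [k / 10 for k in range(1, 2001)]   # 0.1 .. 200.0 ohm
best = max(candidates, key=lambda r: p_load(V, R_SOURCE, r))
print(best)  # the sweep peaks exactly at r_load = r_source = 50.0

# halving the source resistance puts MORE power in the same load:
print(p_load(V, 25.0, 50.0) > p_load(V, R_SOURCE, 50.0))  # True
```

The second print already hints at the catch discussed next: for a fixed source the matched load is optimal, but a designer who can lower R_source should do so rather than "match" to it.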


An amplifier (in gray) with a load.

The optimum above is always true, since it is a mathematical truth. However, it is only true if you assume an ideal source with a certain — fixed — source impedance, which you certainly do not have if you design your own power amplifier. If you design an amplifier, then you have to deal with limitations in output voltage and current, and you have a degree of freedom in the output impedance of your amplifier. If you want a maximum output power, then you have to take these conditions into account, which results in different requirements for the optimum load impedance. The other two (partial) derivatives are:

∂P_load/∂R_source = −2 · V²_source · R_load / (R_load + R_source)³

∂P_load/∂V_source = 2 · V_source · R_load / (R_load + R_source)²

from which it follows that

• the partial derivative ∂Pload/∂Rsource ≤ 0 for every source voltage (amplitude) and combination of resistors. Hence, to obtain maximum power in the load, the output impedance of the amplifier Rsource must be as low as possible.

• the partial derivative ∂Pload/∂Vsource ≥ 0, so to obtain maximum power in the load, the voltage (amplitude) Vsource must be as large as possible,

• after satisfying the previous conditions, the load impedance should be equal to the conjugate of the output impedance of your amplifier OR, better yet, should be dimensioned in such a way that the PA at maximum power settings simultaneously reaches its maximum voltage and current handling capabilities.

For a real amplifier, the first item just has to be designed by you, the circuit designer. The second item is usually limited by things like clipping to the supply voltage. The third item is very much related to the first two. A reasonable design strategy to create maximum output power with a real power amplifier could be:

1. design an amplifier with a low output resistance Rsource, with a large amplitude of the (unloaded) output voltage Vout,open and a large maximum output current Iout,max.

2. perform a conjugate impedance matching, Zload = Zsource*, but only if the amplifier is not clipping its current or voltage. In reality, you will never be able to do this.

3. Take the load impedance for which the maximum output current can also create the maximum output voltage: Rload = Vout,open / Iout,max. If this impedance is not equal to the real load impedance (because it is an antenna, speaker, laser or ...), then you should design and include an impedance transformation network to solve the problem.
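As an illustration of step 3, a small sketch with made-up amplifier limits (none of these numbers are from the book) that computes the optimum load and the turns ratio of an ideal transformer mapping a given antenna impedance onto it:

```python
# Hypothetical amplifier limits and antenna impedance (not from the book).
V_out_open = 20.0   # V, maximum unloaded output amplitude
I_out_max = 2.0     # A, maximum output current
R_antenna = 50.0    # ohm, the actual load

# Load at which the voltage and current limits are reached simultaneously:
R_load_opt = V_out_open / I_out_max   # 10 ohm

# An ideal transformer with secondary:primary turns ratio n presents
# R_antenna / n^2 at its primary; choose n so that this equals R_load_opt.
n = (R_antenna / R_load_opt) ** 0.5
print(R_load_opt, n)
```

The transformer is only one possible impedance transformation network; at RF, LC matching sections are common as well.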

10.9.2 Fourier transformations, FFT and more

Many performance analyses for circuits are performed using sine waves: you input one or more sines and inspect the output signal. Depending on the actual output spectrum of the system, conclusions can be drawn about the distortion and non-linear effects of the system. Similarly, a harmonic oscillator must create a neat sine by itself at only one certain frequency; analysing the output spectrum reveals the quality of that oscillator. To find the quality of any sine wave, you could analyze the shape of the wave (in a simulation or on the scope) and conclude that everything looks like a normal sine wave. Obviously, that type of analysis is crap: our eyes are conditioned to spot straight lines, and anything that vaguely resembles a sine appears to be a nice sine to humans. For example, sine waves with distortion levels up to tens of percents are easily perceived as undistorted. A more reliable method for evaluating the quality of a sine is analyzing its frequency components. This can be performed with analog and digital filters, as well as with Fourier transformations. Whether it is a general Fourier transform or a faster version (an FFT) does not matter that much. The principle is:

A Fourier transform determines the correspondence between your signal on the interval [0,T] and sines and cosines with periods T/n.

This principle results in a number of requirements for the signal used for a Fourier transform:

• the signal is (near) periodical in [0,T], and so

• all initial start-up effects have to be outside of this interval, and

• if you want a large frequency resolution, then n has to be large

If you don’t set up or select your signal carefully, then the transformation cannot be performed correctly, or you will sample the underlying DFT at the wrong points. This then shows up in the spectrum as wide frequency peaks (with skirts) and a high noise floor. Using so-called windows — the best known are the Hamming, rectangular, Gauss and Blackman windows — relaxes the requirement of setting up your signal accurately; the windowing suppresses the effects of feeding an ill-fitting signal into the Fourier transform. The next two figures show how terribly wrong the operation can go for a few simple sine waves. And if it goes wrong for a simple sine, it will most definitely go wrong for an arbitrary signal composed of many different sine waves. The waveforms used in this example are:

fit(t) = cos(2·π·ffit·t)

nofit(t) = cos(2·π·fnofit·t)

start(t) = cos(2·π·ffit·t) · (1 − ½·e^(−t/τ))

The waveform fit(t) is chosen such that the FFT window contains an exact integer number of periods. The figure below gives the waveform and the samples (circles). The corresponding spectrum, as calculated with the FFT, is given in the next figure: there is a very sharp peak at only the signal frequency. The rest of the spectrum is at -320 dB, which is the noise level of the floating point calculations used here.
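The fit/nofit experiment is easy to reproduce. A minimal sketch with assumed values (64 samples, 8 vs 7.7 periods per window; the book does not give its exact frequencies):

```python
import numpy as np

# A sine that fits the window exactly vs one that does not.
N = 64
n = np.arange(N)
fit = np.cos(2 * np.pi * 8 * n / N)      # exactly 8 periods in the window
nofit = np.cos(2 * np.pi * 7.7 * n / N)  # 7.7 periods: does not fit

spec_fit = np.abs(np.fft.rfft(fit))
spec_nofit = np.abs(np.fft.rfft(nofit))

# The fitting sine puts all its energy in one bin (the rest is float noise),
# while the non-fitting sine leaks into all bins ("skirts").
print(spec_fit.argmax())                             # bin 8
print((spec_fit > 1e-3 * spec_fit.max()).sum())      # 1 significant bin
print((spec_nofit > 1e-3 * spec_nofit.max()).sum())  # many significant bins
```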


Three examples: (a) a signal which exactly fits the interval, (b) a signal that does not fit exactly and (c) a signal which fits but has a startup effect.

The waveform nofit(t) is also a sine, but with a slightly lower frequency: if you now sample the waveform, see curve (b) in the figure above, then you see that the last sample actually corresponds to the next period of the sine. This waveform does not fit the sample window. Only 1 of the (in this case 64) samples might not seem like a big deal, but the resulting spectrum is not very good, see the next figure.


The FFT spectra for the signals from the previous figure.

If you get the sample time and number of samples exactly right with respect to the frequency of a signal, then an FFT will give a good spectrum. However, if you have some startup effect, then the spectrum can still look like rubbish. All these non-ideal effects are due to an incorrect application of an FFT or FT. Obviously, performing an (F)FT on a non-stationary signal (e.g. one with startup effects) is a bad idea if you want the spectrum of the stationary signal. Other effects in the previous figure (showing the spectrum) are due to the FFT algorithm sampling the underlying DFT at suboptimal points. This can be solved by sampling at the correct ones, which can be done in two ways:

• on the one hand, you could sample at exactly the right time, causing the FFT to sample the frequency spectrum of the underlying DFT at the correct points. Therefore, you must use an integer number of periods in your FFT, see curve “fft(fit)” in the previous figure.

• on the other hand, you could increase the frequency resolution of the DFT, causing the FFT to sample closer to the ideal frequencies. The latter can be obtained by sampling for a very long time (the best way) or by virtually increasing your sample length by padding: appending samples with a value of 0. The figure below shows the DFT (with padding) for the non-fitting signal (curve b in the figure that shows the signals and the sampling points).
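A sketch of the padding trick, using a hypothetical non-fitting sine (7.7 periods in a 64-sample window; the padding factor of 16 is arbitrary):

```python
import numpy as np

# Zero padding increases the frequency resolution of the FFT, so its bins
# land closer to the peak of the underlying spectrum. Signal parameters
# (64 samples, 7.7 periods, 16x padding) are illustrative.
N = 64
n = np.arange(N)
nofit = np.cos(2 * np.pi * 7.7 * n / N)

spec_plain = np.abs(np.fft.rfft(nofit))           # bins 1/N apart
spec_padded = np.abs(np.fft.rfft(nofit, 16 * N))  # 16x finer bin spacing

# Peak positions, expressed in periods per original window:
print(spec_plain.argmax())         # 8: the nearest coarse bin
print(spec_padded.argmax() / 16)   # close to the true 7.7
```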


The DFT and FFT spectra for the non-fitting signal (two figures back, curve b).

Curve FFT1 is the same as fft(nofit) in the previous figure, but with a different scale. Here, we used a relatively low number of samples, causing the frequency resolution to be low and — in this case — also causing the FFT to sample the underlying DFT at the wrong frequencies. The circles denote the values which are given by the FFT. Curve FFT2 is made with many more sample points of the original (non-fitting) signal. Since many more points are available, the frequency resolution increases, resulting in a curve that is much neater than the DFT with padding. In short: the cause of bad results from an FFT is usually plain laziness or not understanding the background of an (F)FT. If you are lazy or do not want to know the background: use many samples and a masking window to mask a number of unwanted effects, and possibly use padding. In all other cases: “aim” your samples a bit better, and the quality of the result increases tremendously.

Chapter 11

Digital Circuits

11.1 Introduction

Digital circuits use high and low levels (“ones” and “zeros”). The input signals of a logical port cause a high or low signal to appear at the output. There are many different types of logical ports, such as the AND, NAND, OR, NOR, etc. With these ports, logical functions can be constructed.


Figure 11.1: Input and output of a NOR port

When looking at the digital functionality of a logical port, the input can only consist of 1’s and 0’s. Depending on the input signals, a 1 or a 0 will instantly appear at the output. Figure 11.1 shows the input and output signals for a NOR port with ideal digital signals; all 1’s and 0’s have the same level, and there is a clear, fixed difference between the levels for a 1 and a 0. When the input signal changes, the output changes instantly, and the transition between 1 and 0 is infinitely fast. Systems with these properties would allow the design of incredibly fast and reliable computers. Unfortunately, such an ideal digital system doesn’t exist in reality: the building blocks of digital systems are real, analog components. These add a certain delay, have a maximum speed, add noise, etc. Only relatively slow digital systems can be regarded as simply as the previous example. Increasing the speed introduces many analog aspects that make the subject hard, but interesting. This chapter will deal with some of those aspects.


11.2 Designing logical building blocks

11.2.1 Basic logic ports

Digital circuits implement (boolean) logical functions like combinatorial logic, CPUs, encoders and more. The three basic operations in boolean algebra are AND, OR and NOT (an inverter): with these functions it is possible to make all digital functions. Since a NOR or a NAND can be used to make an OR, an AND or a NOT, every function can be made with only NORs or only NANDs1. In this section the NOR will be discussed using knowledge from the previous chapters.

11.2.2 The relation between “high” (digital) and “high” (analog)

Table 11.1 shows a truth table for a 2-input NOR. In digital coding only two signal levels are used: an ideal 0 and an ideal 1. However, a digital circuit is built using analog components that operate on analog signals. Therefore, a relation needs to be found between the digital representation of figure 11.1 and a real, physical signal.

2-input NOR
  x              0   1   1   0
  y              0   0   1   1
  z = NOT(x+y)   1   0   0   0

Table 11.1: Input and output of a NOR
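The table can also be generated mechanically; a small sketch:

```python
from itertools import product

# Generate the truth table of a 2-input NOR (cf. table 11.1).
def nor(x: int, y: int) -> int:
    return 1 - (x | y)

table = {(x, y): nor(x, y) for x, y in product((0, 1), repeat=2)}
print(table)   # {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0}
```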

A simple solution would be to choose the digital high level equal to the supply voltage


Figure 11.2: First choice for analog high and low

and the low level equal to ground (see figure 11.2). However, this leads to problems for a number of reasons:

• Every power supply has an internal resistance, leading to a voltage drop for every little current. The “high” and “low” levels, being equal to the supply voltage or ground of a certain port, can therefore vary slightly between one port and the next. This would lead to errors in the digital circuit.

• Every circuit has a certain output impedance and input impedance. If these are both resistive, the input signal of a digital circuit will never be equal to the supply voltage or ground, because Vin = Vout,ideal · Rin/(Rin + Rout). The digital circuit would only produce errors.

1More specifically: with a NOR or a NAND having at least two inputs. With a single-input NOR or NAND you can only invert or not invert the signal.

• If the input impedance is capacitive, the digital circuit can operate properly with “high” equal to the supply voltage and “low” equal to ground. The only problem is that the input voltage after a signal transition changes according to V(t) = V(0) + ΔV·(1 − e^(−t/RC)). For the signal to become exactly equal to “high” or “low”, one would have to wait for an infinite time.

Evidently, the choice we made for “high” and “low” was not a good one: we either get a lot of errors or the processing speed is extremely low. A different choice might be to define the threshold between a high and a low signal at half the supply voltage. Everything above this threshold we call “high” and everything below it “low” (see figure 11.3a). Again, a disadvantage is that variations in the voltage (which will inevitably occur) can cause a 0 to turn into a 1 and vice versa. To reduce the likelihood of errors


Figure 11.3: Threshold high/low: a) in the middle, b) with an undefined area

as a result of noise, we choose an area between 0 and 1 that is undefined (see figure 11.3b). The closer the borders of this area are together, the more likely it is that a digital 0 will change into a digital 1 or vice versa. The larger we make this area, the longer the signal will be undefined, leading to a slower system.

11.3 Old solutions: DL, DTL and TTL

11.3.1 Diode logic

As mentioned before, NOR and/or NAND ports are needed to make any digital function. These operate by mapping the digital 1’s and 0’s at the inputs and outputs onto analog voltages. In chapter 1 a number of logic ports were described using diode logic.


Figure 11.4: AND-port in diode logic

In chapter 1 an OR is given as an example of a digital circuit using diode logic; in figure 11.4 the diode logic implementation of an AND is given. Analysis of its function using the method from chapter 1 yields:

• The output voltage is high when neither diode is conducting, because the resistor is connected to VDD.

• If either diode is conducting (vX or vY is low), the output voltage becomes equal to the low input voltage plus 0.6-0.7V (the assumed voltage drop over the conducting diode).

It is impossible to realize an inverter with diode logic. Therefore, making a NAND or NOR is also impossible, and not every digital function can be realized using only diodes; only a few simple functions are possible. Even for functions that can be realized in diode logic there are some (analog) limitations. When one logic port follows another, the output signal of the first must be high enough to correctly operate the next. However, diode logic introduces a loss of voltage level per port, which can be as high as the forward voltage of a diode (around 0.6V). Using a supply voltage VDD, this leads to a maximum of VDD/Vdiode,on cascaded diode logic ports2.
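The level loss can be sketched numerically, assuming VDD = 5 V, a 0.6 V diode drop, and the 1/0 threshold at half the supply (all values are illustrative):

```python
# Each cascaded diode-logic stage shifts a "low" level up by one diode drop;
# count how many stages fit before a low is no longer recognised as low.
VDD = 5.0       # V, supply voltage (assumed)
V_diode = 0.6   # V, forward drop of a conducting diode (assumed)

low = 0.0
stages = 0
while low + V_diode < VDD / 2:   # the next port still sees a valid "low"
    low += V_diode
    stages += 1
print(stages)   # 4, i.e. roughly VDD / (2 * V_diode)
```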

11.3.2 Diode-transistor and transistor-transistor logic

The two large problems of diode logic are that the output voltage degrades with every diode in the signal path and that making an inverter is impossible. Both problems can be solved by adding an amplifying element that has a negative gain. Adding this to our existing diode logic leads to diode-transistor logic: DL with an inverting amplifier,


Figure 11.5: DTL logic: the NAND

see figure 11.5. The circuit in figure 11.5 is nothing more than DL followed by a common-emitter amplifier. When both inputs vX and vY are equal to the supply voltage, neither DX nor DY will conduct. The BJT now receives a base current through the “bias network” R1 - D1 - RB, and will conduct. With the right values for the resistances, these input voltages will drive the transistor into saturation and the output voltage will be “low”. When either input is equal to ground, for example vX, the diode DX will conduct. V1 is now equal to the input voltage vX plus the forward voltage over diode DX. Consequently, D1 no longer conducts and the base of the BJT is pulled to 0V through RB: the base current becomes zero. There will be no current through RC and the output voltage is “high”. Transistor-transistor logic (TTL) is essentially equal to DTL, except that the separate diodes are each replaced by a transistor. The reason is that in IC technology a BJT with multiple emitters can be realized much more compactly than separate diodes. The advantage of TTL over DTL is therefore better use of space. In the 1970s, TTL was the weapon of choice for realizing digital functions. In the 1980s, CMOS was introduced (see the rest of this chapter). Nowadays TTL is only used for circuits that have relatively few ports.


Figure 11.6: TTL logic: a 3-input NAND (simplified)

2That is, VDD/Vdiode,on ports if we define the threshold between “1” and “0” near VDD. With the threshold at half the supply voltage we can only cascade VDD/(2·Vdiode,on) ports...

11.4 NMOS and PMOS logic

11.4.1 From TTL to NMOS

With the introduction of CMOS technology, TTL is no longer the preferred way to implement logical functions. We will deal with NMOS logic first: logic consisting of ports that have an NMOS transistor as the active element. NMOS logic is the predecessor of CMOS logic, which uses both NMOS and PMOS transistors; because those are complementary devices, the name CMOS is used. The starting point for discussing NMOS logic is TTL logic in which the NPN is replaced by an NMOS transistor. A difference between DTL, TTL and NMOS logic is that multiple inputs in


Figure 11.7: a) A 2-input NOR b) A 1-input NOR in NMOS logic

bipolar technologies were made using diode-logic-type circuits. In MOS, diode logic is not the most efficient: a better way is to place transistors directly in parallel3. Placing MOS transistors directly in series or parallel also removes the need for a bias network. A two-input NOR then resembles figure 11.7a.

11.4.2 The analog linear amplifier

In fact, a one-input NOR is not much different from the common-source amplifier discussed in chapter 5, see figure 11.7b. The only significant difference is that in chapter 5 the circuit was used as a linear amplifier, with a proper bias and applied to small signals.

11.4.3 A digital (saturated) amplifier

For the NOR we established that a threshold voltage exists that defines a signal to be “high” or “low”. In digital applications the level of the voltage is important, not the shape of the signal. This leads to very different requirements for the circuit than in the case of an analog amplifier, where small signals are used for proper behavior. We will now analyze the large-signal behavior of the circuit as used in digital logic. Suppose that Vin in figure 11.7b is a source that supplies a square wave between ground and VDD.

3In bipolar technologies that would be possible too, but it would lead to very inefficient solutions.

• If the input voltage is “low”, vGS will (typically) be smaller than the threshold voltage VT and the drain current will be virtually equal to 0. The voltage lost over the resistor is 0V and the output voltage is equal to VDD: “high”.

• If vIN = VDD, a current will run through the transistor which also passes through the resistor. The output voltage will become vOUT = VDD − iD · RD: “low”.

Notice that the output voltage vOUT can never become 0V, because then the drain current of the NMOS transistor would be equal to 0. The resulting voltage follows from a few equations for the “low” state (the transistor is then in the triode region):

iD = μn · Cox · (W/L) · ((VDD − VT) · VDS − VDS²/2)

vDS = VDD − RD · iD

Solving vOUT = vDS can be done through a straightforward solution of a 2nd-order equation.
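A sketch of that 2nd-order solution, with made-up device values (μn·Cox·W/L, VDD, VT and RD below are all hypothetical):

```python
import numpy as np

# Combine the triode equation with the load line and solve for the "low"
# output level of the NMOS inverter. All device values are hypothetical.
beta = 1e-3           # un*Cox*W/L in A/V^2
VDD, VT, RD = 5.0, 1.0, 10e3

# beta*((VDD-VT)*v - v^2/2) = (VDD - v)/RD rearranges to
# (beta*RD/2)*v^2 - (beta*RD*(VDD-VT) + 1)*v + VDD = 0
a = beta * RD / 2
b = -(beta * RD * (VDD - VT) + 1)
c = VDD
roots = np.roots([a, b, c])
v_low = min(r for r in roots.real if 0 < r < VDD - VT)  # triode-region root
print(round(v_low, 3))   # 0.124 V for these values
```

The other root of the quadratic lies outside the triode region and is discarded.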

The resulting “low” voltage depends, as expected, on the dimensions of the transistor and the resistance used. The behavior of the inverter can also be derived using a load line construction, see figure 11.8. The input signal in figure 11.8 has a sinusoidal


Figure 11.8: Digital switching illustrated by a load line construction

shape (to illustrate that it will become more block-shaped) and is shown in the lower left corner. Notice that the “time axis” is vertical and runs from bottom to top. This input signal is equal to the transistor’s vGS and supplies a (saturation) drain current, which can be constructed by mirroring vIN in the vGS − iD curve. Corresponding to this curve there is a vDS − iD curve in saturation (our assumption for the vGS − iD transfer function). The intersection of that vDS − iD curve with the load line VDD = vDS + iD · RD gives the output voltage of the circuit in both the linear and the saturated case! The resulting output signal is given as a function of time in the lower right corner. From this graphic method it is clear that the output signal is a distorted version of the input signal: it becomes more or less block-shaped. Furthermore, it is evident that the output signal cannot become 0V.

11.4.4 Arbitrary functions with NMOS logic

As mentioned before, to make an arbitrary function one needs a NAND or a NOR. In NMOS logic a NAND is made by connecting transistors in series: this yields a boolean AND, while the negative gain of the circuit supplies the inversion. In figure 11.9b the schematic for a 3-input NAND in NMOS technology is given. In


Figure 11.9: a) NOR: F = NOT(A + B + C) b) NAND: F = NOT(A · B · C)

a similar way, the NOR is a boolean OR operation using parallel transistors plus an inversion. This is shown in figure 11.9a. Designing a circuit for a simple logical function is easy using De Morgan’s rules (NOT(A + B) = NOT(A) · NOT(B) and NOT(A · B) = NOT(A) + NOT(B)). For example, the logical function

F = NOT(A · B + C · D) = NOT(A · B) · NOT(C · D) = (NOT(A) + NOT(B)) · (NOT(C) + NOT(D)) (11.1)

results in the NMOS logic circuit shown in figure 11.10. Note that this circuit is a mix of NAND and NOR.
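The rewriting in (11.1) can be checked exhaustively; a small sketch:

```python
from itertools import product

# Verify De Morgan's rewriting of (11.1): NOT(A*B + C*D) equals
# (NOT(A) + NOT(B)) * (NOT(C) + NOT(D)) for all 16 input combinations.
for A, B, C, D in product((0, 1), repeat=4):
    lhs = 1 - ((A & B) | (C & D))
    rhs = ((1 - A) | (1 - B)) & ((1 - C) | (1 - D))
    assert lhs == rhs
print("equivalent for all 16 input combinations")
```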


Figure 11.10: NMOS implementation of equation (11.1)

11.4.5 The PMOS alternative

An alternative to NMOS circuits like those of figure 11.7a and b is to use PMOS transistors instead of NMOS ones. This results in a complementary circuit with the same possibilities and problems as NMOS logic. Note that because the voltages of PMOS transistors are complementary to those of NMOS transistors, the circuits are also complementary: a parallel connection of NMOS transistors corresponds to a series connection of PMOS transistors and vice versa. The reason PMOS logic was not used is the lower mobility of holes (the charge carriers in PMOS) compared to that of electrons (the charge carriers in NMOS): PMOS logic is simply slower than NMOS logic.


Figure 11.11: PMOS implementation

11.5 The current solution: CMOS implementation

The implementations of digital circuits discussed until now (NMOS, PMOS, DTL, TTL) all have a few specific disadvantages:

• The output voltage is not 0 or VDD, but spans a smaller range.

• If the transistors are “on”, a current flows, which causes power consumption.

If we now compare NMOS and PMOS logic, we see that there is no current in NMOS logic if the output voltage is “high”, while in PMOS logic no current runs in the “low” state. This of course calls for a combination of these advantageous properties. The first step is to stick both implementations together; the resulting circuit can be seen in figure 11.12a.

Combination of disadvantages

When we connect the supply voltage to the input of the port in figure 11.12a, the PMOS transistor will not conduct. The NMOS, however, will conduct and a current will run through the top resistor. When we connect a low voltage to the input, the PMOS transistor will conduct and the NMOS will not; now a current runs through the bottom resistor... In other words, the situation is now worse than in a separate PMOS or NMOS implementation, because there will always be a current regardless of the output state: we made a combination of disadvantages. These disadvantages are


Figure 11.12: a) PMOS and NMOS implementations combined b) a different representation

caused by the two resistors. If the components in the schematic of figure 11.12a are reorganized, we get the schematic of figure 11.12b. Now we see immediately that the resistors have no useful function: they merely constitute an output resistance for the transistors and cause a supply current. This current is a pure disadvantage!

A combination of advantages

Because all the above misery is caused by components that have no useful function, the solution is simple: take out all unnecessary resistors. The resulting logical building blocks are called CMOS logic, where the C stands for “complementary”. A few examples of simple building blocks in CMOS are displayed in figure 11.13. It is clear from these schematics that the number of components in CMOS is larger than in


Figure 11.13: CMOS-inverter, 2 input NAND and 2 input NOR

TTL, NMOS or PMOS logic. Because transistors don’t take up much space, however, a CMOS implementation is often still smaller than the alternatives. The big advantage of CMOS over other forms of logic is the lack of static power consumption: either the NMOS or the PMOS transistor is “off”, so ideally there is no current from VDD to ground4.

4Murphy’s law is also applicable here.

11.6 The loading of a port

The output of a logical port will, in most cases, be used to drive subsequent ports. In CMOS a port is constructed out of MOS transistors, which each have an input capacitance (see chapter 5). When a port is loaded by another port, the second port can be regarded as a capacitive load on the first. Even a port at the output of the digital part is loaded (otherwise the port would be useless), and those loads are typically capacitive too. In short: the load of a port can usually be modeled as a capacitance.

The charging and discharging of the load

A logical port will give a high or a low output signal depending on the input signals. If the output signal changes, the load of the port, modeled as a capacitor, has to be charged or discharged. The current required for this is supplied by the port: a current runs from the supply, via the port, to the capacitor until the final voltage is reached. If we model a MOS transistor in the “on” state as a (small) resistance and in the “off” state as an open switch, we get the (dis)charge schematics of figures 11.14b and 11.14c. The power required to (dis)charge a capacitive load, regardless of the resistance,


Figure 11.14: a) inverter b) charging of load c) discharging of load

is

P = Cl · VDD² · f0 (11.2)

Derivation

The energy required for charging a capacitor can be derived using, for example, figure 11.14b. If we assume the capacitor to be completely discharged initially, the following expressions hold for the charging

current, the energy required for charging, and the total energy in the capacitor:

iC(t) = C · dvC/dt

Echarge = ∫₀^∞ iC(t) · VDD dt

EC = ∫₀^∞ iC(t) · vC dt

The last two equations can easily be calculated by substituting the first into them. If we explicitly assume an initially uncharged capacitor, this yields:

Echarge = ∫₀^∞ C · (dvC/dt) · VDD dt = C · VDD · ∫₀^VDD dvC = C · VDD²

EC = ∫₀^∞ C · (dvC/dt) · vC dt = C · ∫₀^VDD vC dvC = ½ · C · VDD²

We see that charging a capacitor to a certain voltage requires an energy C·V², while the stored energy is only ½·C·V²... half of the energy required for charging. We conclude that the other half disappears in the resistor. For the power drawn from the supply when charging and discharging a capacitor at a frequency fswitch, it follows (because extra current from the supply VDD is only required while charging):

Pcharge = fswitch · C · VDD²

It might seem strange that half of the energy is lost, but it is also easy to understand. The charging and discharging of the capacitor is a symmetrical problem: discharging is simply charging the capacitor to a different voltage. When discharging from VDD to 0V, ½·C·VDD² must be dissipated; when charging from 0V to VDD it is ½·C·(−VDD)² = ½·C·VDD²: in total C·VDD² per cycle. Everything is dissipated in the resistance used while (dis)charging5.
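The derivation can be checked by numerically integrating the exponential charging curves; all component values below are arbitrary:

```python
import numpy as np

# Numerically check E_charge = C*VDD^2 and E_C = (1/2)*C*VDD^2 for an
# initially empty capacitor charged through a resistor. Values are arbitrary.
R, C, VDD = 1e3, 1e-6, 5.0
tau = R * C
t = np.linspace(0.0, 20 * tau, 200_001)      # ~20 time constants
i = (VDD / R) * np.exp(-t / tau)             # charging current
v = VDD * (1 - np.exp(-t / tau))             # capacitor voltage

def trapz(y, x):
    """Trapezoidal integration (portable across NumPy versions)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

E_charge = trapz(i * VDD, t)   # energy drawn from the supply
E_cap = trapz(i * v, t)        # energy ending up in the capacitor
print(E_charge / (C * VDD**2))      # close to 1
print(E_cap / (0.5 * C * VDD**2))   # close to 1
```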

The energy consumption of a digital circuit can be limited by reducing the load capacitance, the supply voltage or the frequency. We will come back to this subject in the section about choosing the supply voltage. In (11.2) it was assumed that the capacitor was charged or discharged continuously. However, the output signals of digital building blocks do not change at every clock tick, but only at a fraction of the clock ticks. Furthermore, the size of Cload depends on the number of ports connected to the output of the port. To take these observations into account, we expand (11.2) for the (dis)charging of a load:

P = ξ · Cl · VDD² · f0 (11.3)

In this equation, ξ is a factor representing the average capacitive load and the average switching speed.
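A back-of-the-envelope application of (11.3), with made-up but plausible numbers (none of them come from the book):

```python
# Dynamic power P = xi * Cl * VDD^2 * f0 per port, times the number of
# ports. All numbers are illustrative, not from the book.
xi = 0.01       # average activity / load factor
C_l = 1e-15     # 1 fF load per port
VDD = 1.0       # V
f0 = 1e9        # 1 GHz clock
ports = 1e7     # ten million ports

P = ports * xi * C_l * VDD**2 * f0
print(P)   # about 0.1 W for the whole chip
```

Note how quadratic the dependence on VDD is: halving the supply voltage cuts this dynamic power by a factor of four.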

5If you don’t want this, you will have to do something about the resistance that is used to charge to a certain level, i.e. about the resistance or the final voltage. The easiest is to charge using something which has no resistance, a coil for example. This is handled in chapter ??, which is about harmonic oscillators. It is also used in class E and F amplifiers (very efficient and popular in RF transmitters), which are not dealt with in this book.

11.6.1 Comparing power consumption

One of the most important properties of digital circuits is their power consumption: the less power consumed per building block, the more blocks can be placed on a cm² from a temperature perspective. With a higher density of ports, more processing power can be realized.

TTL

When the output voltage of figure 11.6 is high, a current runs via R1 and Q1 into vX, vY or vZ. If the output voltage is low, a base current runs through R1 and Q0, and a collector current runs through RC. In a TTL implementation, therefore, a current is always flowing. This static power consumption comes on top of the dynamic consumption of (11.3), so the total power consumption of TTL is comprised of two parts. First, we look at the separate parts of the static power consumption. We assume that, averaged over all ports, the signals are high half of the time and low the other half. The static power consumption in a port with a “low” output consists of the base current, the current through the resistor RB and the collector current. For a circuit consisting of N ports:

Pbase = (N/2) · VDD · iB = (N/2) · VDD · (VDD − VBE)/R1

PRB = (N/2) · VBE²/RB

PIC = (N/2) · VDD · (VDD − V0)/RC

These three powers together form the static power consumption. The total power dissipated in a TTL implementation is the sum of the static and dynamic power:

Ptot = (N/2) · (VBE²/RB + VDD·(VDD − VBE)/R1 + VDD·(VDD − V0)/RC) + ξ · Cl · VDD² · f0 (11.4)

NMOS/PMOS

NMOS and PMOS logic do not differ significantly from TTL logic in this respect. The big difference regarding power consumption is the absence of input currents. The power dissipation of an NMOS/PMOS implementation is:

Ptot = (N/2) · VDD · (VDD − VDS)/RD + ξ · Cl · VDD² · f0 (11.5)

CMOS

A difference between the CMOS implementation (see figure 11.12) and the NMOS/PMOS implementation is that in CMOS, ideally, current only flows from supply to ground during switching (when both the NMOS and the PMOS transistor conduct). The static power consumption is (ideally) zero! For low frequencies this difference is a clear advantage over NMOS/PMOS implementations; for high frequencies this is not always the case. Assuming the circuit is symmetric (βn = βp and VTn = −VTp), the power consumed by this short-circuit current in figure 11.13 (with the time from t1 to t2 being half a period) is:

\[ P_{shortcirc} = \frac{1}{0.5 \cdot T} \cdot \zeta \cdot \int_{t_1}^{t_2} i(t) \cdot V_{DD} \, dt \tag{11.6} \]

Here, ζ is a factor that represents how often the different ports switch. The power consumption of the CMOS implementation then becomes:

\[ P_{tot} = \frac{2}{T} \cdot V_{DD} \cdot \zeta \cdot \int_{t_1}^{t_2} i(t) \, dt + \xi \cdot C_l \cdot V_{DD}^2 \cdot f_0 \tag{11.7} \]

In modern CMOS processes there is also a leakage current through transistors that should be "off"; nowadays this causes a significant power consumption. This leakage is caused by the weak inversion currents treated in chapter 5. In short: the transistor is not completely "off" for vGS below the threshold voltage.
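Equation (11.7) can be evaluated numerically once a shape for the short-circuit current pulse i(t) is assumed. Below, the pulse is taken as triangular with peak Ipk over the interval from t1 to t2; the pulse shape and all numbers are illustrative assumptions, not taken from the text:

```python
# Sketch of eq. (11.7): CMOS power = short-circuit term + dynamic term.
# A triangular short-circuit current pulse is assumed; all values are
# illustrative.

def cmos_power(VDD, zeta, Ipk, t_sc, T, xi, Cl, f0):
    # integral of a triangular pulse of height Ipk and width t_sc
    charge = 0.5 * Ipk * t_sc                  # approximates the integral of i(t)
    p_short = (2.0 / T) * VDD * zeta * charge  # first term of eq. (11.7)
    p_dyn = xi * Cl * VDD**2 * f0              # second term of eq. (11.7)
    return p_short + p_dyn

f0 = 100e6
T = 1 / f0
p = cmos_power(VDD=2.5, zeta=0.2, Ipk=1e-3, t_sc=0.05 * T, T=T,
               xi=0.1, Cl=1e-12, f0=f0)
print(f"{p * 1e6:.1f} uW")
```

With these numbers the dynamic term is a few times larger than the short-circuit term; at higher supply voltages or faster edges the balance shifts.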

11.7 Choosing the supply voltage

11.7.1 Scaling

In the previous text, there is much discussion of high and low signals, which depend on the supply voltage. To be more specific, we should fill in numbers. These numbers, however, are not fixed, but change over the years, because the supply voltage depends on the size of the transistors, which is continuously decreasing. Chip manufacturers try to make their transistors as small as possible. The smaller the transistors, the faster the circuits that can be made (electrons need less time to get from one side of the transistor to the other). Also, the smaller the transistors, the more of them can be put on a single chip, so more complex functions can be realized. Furthermore, smaller transistors consume less power.

Figure 11.15: Moore's law

The number of transistors on a chip and the minimum dimensions of transistors have, for a long time, followed Moore's law. Based on only a little data, as can be seen in figure 11.15, Gordon Moore predicted in 1965 the number of components on an IC as a function of time. Because the whole semiconductor industry wanted to comply with this prediction, it became a self-fulfilling prophecy: reality followed it because everyone tried to match it exactly.

Power consumption on a piece of silicon

The power consumed is given by equation 11.3. Here, C is the input capacitance of a digital port, which in CMOS consists of a number of MOS transistors. If the dimensions of the transistors are scaled down by a factor s, the input capacitance becomes a factor s smaller. This is true because the length, the width and the oxide layer thickness all scale with s, leading to:

\[ W' = s \cdot W \qquad L' = s \cdot L \qquad d' = s \cdot d \]
\[ C' = \frac{\epsilon \cdot W' \cdot L'}{d'} = s \cdot C \]

As derived before, the (active) power dissipation of a port decreases linearly with the load capacitance. Per port, therefore, using smaller transistors (s < 1) decreases the power consumption. However, the density of ports can be a factor s⁻² higher, leading to an increase in power density by a factor s⁻¹! Additionally, the frequency of the clock signals increases, which according to equation 11.3 leads to a linear increase in dissipated power. This is why old computers could work without heat sinks and fans, while current computers cannot. Today's clock frequencies (multiple GHz) lead to power dissipations of 50-80 W on a chip area of about 2 cm². To compare: a cooking plate in a furnace delivers 6 W/cm², a factor 5 less than the power density of such a CPU! When the cooling system fails, the CPU might end up looking like figure 11.16.
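The scaling relations above can be checked numerically. The permittivity and transistor dimensions below are illustrative values, not taken from the text:

```python
# Sketch of the scaling relations above: with W, L and d all scaled by
# s (< 1), the gate capacitance scales by s, power per port by s, but
# the port density grows by 1/s^2, so the power density grows by 1/s.
# eps and the dimensions are illustrative values.

eps = 3.45e-11          # F/m, permittivity of SiO2 (approx.)
W, L, d = 1e-6, 0.25e-6, 5e-9
s = 0.7                 # one classic scaling step

C  = eps * W * L / d
Cs = eps * (s * W) * (s * L) / (s * d)
assert abs(Cs - s * C) < 1e-25          # C' = s * C

# power per port scales with C -> factor s; ports per area -> 1/s^2
rel_power_density = s * (1 / s**2)      # = 1/s
print(rel_power_density)                # roughly 1.43x higher for s = 0.7
```

The assertion confirms that scaling all three dimensions by s leaves a single net factor s in the capacitance.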

Figure 11.16: A CPU after a cooling failure

11.7.2 Transistor scaling and supply voltage

Many things can go wrong when the dimensions of transistors get smaller and smaller. One of the possible problems is reliability. Suppose the dimensions of the transistors decrease while the voltages are kept constant. The electric field strength E = dV/dx [V/m] will then increase with decreasing dimensions.

This mechanism holds for the vertical field, caused by the voltage across the gate oxide. The gate oxide gets thinner in newer MOS technologies, so the same voltage drops across a thinner oxide, leading to an increasing field strength in the oxide. At low and average electric fields the gate oxide is an excellent insulator, but at high fields the oxide begins to conduct quantum mechanically. The charge carriers that flow into the oxide damage it, and the resulting degradation of the oxide will eventually cause a sudden breakdown. For this reason, the voltage across the oxide cannot be allowed to get too high.

The increasing-field mechanism also holds for the lateral field, meaning the field from drain to source. In the MOS transistor, the charge carriers travel from source to drain. During their journey, they accelerate due to the electric field, but they also regularly lose speed due to collisions. If high-speed charge carriers, so-called "hot carriers", collide with an atom, they deflect. A small fraction will go in the direction of the gate oxide, where they can cause damage.

To prevent damage caused by high vertical and lateral fields, the supply voltage of MOS transistors is lowered when the dimensions of the transistors decrease. For a number of CMOS generations (often named after the minimum channel length that can be realized), a few applicable numbers are displayed below.
The values stem from the National Technology Roadmap for Semiconductors: a prediction of CMOS technology, similar to Moore’s law, and made by the Semiconductor Industries Association [12].

Year of introduction              1997     1999     2002     2005     2008     2011     2014
Channel length [nm]                200      140      100       70       50       35       25
Supply voltage [V]             1.8-2.5  1.5-1.8  1.2-1.5  0.9-1.2  0.6-0.9  0.5-0.6      0.4
Metal layers                         6      6-7        7      7-8      8-9        9       10
Equivalent oxide thickness [nm]    3.5  1.9-2.5  1.5-1.9  1.0-1.5  0.9-1.2  0.6-0.8  0.5-0.6
Max local frequency [MHz]          750     1250     2100     3500     6000    10000    17000
Max microprocessor power [W]        70       90      130      160      170      175      183

Table 11.2: "Roadmap" (1997)

11.8 Speed

At the very beginning of this chapter, it was mentioned that ideal digital circuits are infinitely fast, but that (thankfully) we don't have ideal components. How boring life would be in that case! In this section some speed aspects of digital circuits are dealt with. When the output signal of a port needs to change, charge has to be delivered to, or retrieved from, the load capacitance. In figure 11.17a the input of a CMOS inverter has just become high: only the NMOS transistor is "on" while the output voltage is still "high". The charge of the load capacitor flows away via the MOS transistor.

Figure 11.17: Limited speed of an inverter

In the previous chapters, bandwidth aspects of circuits were discussed using a small-signal equivalent circuit (for example figure 11.17b), while harmonic signals (meaning sine signals) were assumed. For digital ports, however, we can't approach the problem in that way: the input signal is not a sine but a square wave, and it is definitely not a small signal. The first problem can be solved by looking at the separate components that constitute the square wave: when we connect a small-signal square wave to the input of the inverter, we may calculate the output for each separate sine and sum the results at the output. The second observation, however, cannot be circumvented; the behavior of the inverter is large-signal behavior.

Large signal approximation

Speed aspects of digital ports need to be calculated with large signals. Analytically, it is hard to find an exact expression for large-signal parameters. A reasonable approximation can be found, however, if we apply some simplifications (meaning: if we make a model). In the upcoming analysis, the conducting MOS transistor is modeled as a constant current source, see figure 11.17b. The time needed to discharge the capacitor, then, is roughly equal to:

\[ \Delta t \approx C \cdot \frac{V_{DD}}{I_D} \tag{11.8} \]

The maximum frequency of the inverter can then be found through the large signal approximation:

\[ f_{max} < \frac{I_D}{2 \cdot C \cdot V_{DD}} \tag{11.9} \]

If, for simplicity, we assume that the input signal of the logical port changes very quickly from "0" to "1" and back, and if the square-law relation for the transistors is assumed during the whole switching time, it follows that:

\[ \Delta t \approx C \cdot \frac{V_{DD}}{\frac{1}{2} K (V_{DD} - V_T)^2} \approx \frac{2C}{K (V_{DD} - V_T)} \]
\[ f_{max} < \frac{K (V_{DD} - V_T)}{4 \cdot C} \tag{11.10} \]

From this, it is obvious that the maximum clock frequency strongly depends on (among others) the supply voltage. Because the power dissipation also increases with supply voltage, the result is a compromise between (wanted) speed and (unwanted) power dissipation.
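Equations (11.8) through (11.10) can be put to work in a few lines. The transconductance parameter K, the threshold VT and the load C below are illustrative assumptions:

```python
# Sketch of eqs. (11.8)-(11.10): maximum clock frequency of an
# inverter, with the conducting transistor modeled as a constant
# current source following the square law. K, VT and C are
# illustrative values, not taken from the text.

def f_max(K, VDD, VT, C):
    """Upper bound on the clock frequency, eq. (11.10)."""
    dt = 2 * C / (K * (VDD - VT))       # approximated (dis)charge time
    return 1 / (2 * dt)                 # = K * (VDD - VT) / (4 * C)

K, VT, C = 1e-3, 0.5, 1e-13             # A/V^2, V, F
for vdd in (1.2, 1.8, 2.5):
    print(f"VDD = {vdd} V -> f_max < {f_max(K, vdd, VT, C) / 1e9:.2f} GHz")
```

The loop illustrates the compromise mentioned above: the bound on the clock frequency grows linearly with (VDD - VT), while the dynamic power of eq. (11.3) grows with the square of VDD.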

Output signal of a loaded inverter

In figure 11.18a, the output signal of an inverter is shown when it is loaded with a large capacitance. At the input of the inverter there is a square wave which, in the figure, switches between 0 V and 2.5 V. When the clock frequency is increased, the capacitor will not be completely charged by the time the input voltage of the inverter goes up again. It can then happen that the output voltage does not get above the threshold for a high signal, or below the threshold for a low signal, as can be seen in figure 11.18b. Whether this occurs depends on the maximum speed of the port, the clock frequency and the capacitive load of the port.
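The incomplete-charging effect can be estimated with the constant-current model of the previous subsection: in half a clock period the output reaches at most ID·T/(2C). The current, capacitance and "high" threshold below are illustrative assumptions:

```python
# Sketch of the incomplete-charging effect of figure 11.18: with the
# charging transistor modeled as a constant current source ID, the
# output voltage reached in half a clock period is min(VDD, ID*T/(2*C)).
# ID, C and the threshold are illustrative assumptions.

def v_reached(VDD, ID, C, f_clock):
    """Output voltage reached in half a clock period."""
    return min(VDD, ID * (1 / f_clock) / (2 * C))

VDD, ID, C = 2.5, 100e-6, 10e-15
V_HIGH = 0.7 * VDD                     # assumed "high" threshold

for f in (1e9, 4e9, 10e9):
    v = v_reached(VDD, ID, C, f)
    ok = "high reached" if v >= V_HIGH else "stays below V_HIGH"
    print(f"f = {f:.0e} Hz: v_out = {v:.2f} V ({ok})")
```

At 1 GHz the output still swings fully; at the higher frequencies it no longer crosses the assumed high threshold, as in figure 11.18b.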

Figure 11.18: Output voltage of a loaded inverter a) normal b) speed too high

Signal path length

It has been shown that the speed of a logical port is not infinitely high. Due to this, the delay that occurs in a signal path depends on the number of ports in that path. It follows that when two different signal paths have the same input signals and eventually join back together, there may be a time difference between their outputs.

Figure 11.19: Time difference in signal path

As an example, we take the circuit in figure 11.19. In the top signal path there are 4 ports, and in the bottom path there is 1. Now suppose that each port introduces a delay of Δt = 10 ps. The output of the top path will then reach the correct output voltage 3Δt = 30 ps later than the bottom one. It is easily checked that the circuit in figure 11.19 will make errors: the output of the AND port should stay low at all times, but it will give off a pulse now and then. This unwanted pulse is called a glitch.
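The glitch mechanism can be simulated with a toy discrete-time model. The exact gate arrangement below (an AND of a signal with its own delayed inverse) is an illustrative stand-in for figure 11.19, not a literal copy of it:

```python
# Sketch of the glitch mechanism: an AND of a signal with its (delayed)
# inverse. Ideally the output is always 0, but when the long path lags
# by 3 extra port delays, a pulse appears. One time step = 10 ps.
# The gate arrangement is an illustration, not figure 11.19 itself.

DT = 1            # one port delay = one time step (10 ps)

def delayed(signal, n_ports):
    """Signal after passing through n_ports, each delaying by DT."""
    return [0] * (n_ports * DT) + signal[:len(signal) - n_ports * DT]

a = [0, 0, 1, 1, 1, 1, 1, 1, 1, 1]        # input step
path1 = delayed([1 - x for x in a], 4)    # 4 ports including an inverter
path2 = delayed(a, 1)                     # 1 port
out = [x & y for x, y in zip(path1, path2)]
print(out)    # a short run of 1s marks the glitch
```

With equal path delays the output would be 0 everywhere; the 3Δt mismatch opens a brief window in which both AND inputs are high.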

11.8.1 Definitions of parameters

In a digital signal, a number of parameters can be distinguished that determine its characteristics: the rise time, the fall time, and the propagation delay. The definitions are as follows:

• The rise time (tr, see figure 11.20a) is a measure for the speed at which a logical signal makes the transition from low to high; it is defined as the time needed to get from 10% to 90% of the end level.

• The fall time (tf, see figure 11.20a) is a measure for the speed at which a logical signal makes the transition from high to low; it is defined as the time needed to go from 90% to 10% of the starting level.

• The propagation delay is a measure for the speed at which the output of a port follows its input; it is defined as the time which elapses between the moment that the input signal is at 50% of its value and the moment that the output voltage is at 50% of its value.

The propagation delay for a rising output signal (0 → 1), tdr, can differ from the propagation delay for a falling signal (1 → 0), tdf. This is caused (among others) by a difference between the capacitances of an NMOS transistor (which usually takes care of the transition 1 → 0) and those of a PMOS transistor (ditto for 0 → 1).
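The 10%-90% definition can be applied directly to a sampled waveform. Below it is applied to an exponential (RC-shaped) edge, for which the analytic rise time is τ·ln(9), about 2.2τ; the time constant and sampling grid are illustrative assumptions:

```python
# Sketch of the rise-time definition above: measure t(10%) -> t(90%)
# on a sampled exponential edge v(t) = VDD * (1 - exp(-t/tau)).
# tau and the sampling grid are illustrative.

import math

def rise_time(tau, VDD=2.5, steps=100000, t_end=None):
    """Find the 10%-90% rise time on a sampled exponential edge."""
    t_end = t_end or 10 * tau
    t10 = t90 = None
    for i in range(steps):
        t = i * t_end / steps
        v = VDD * (1 - math.exp(-t / tau))
        if t10 is None and v >= 0.1 * VDD:
            t10 = t
        if v >= 0.9 * VDD:
            t90 = t
            break
    return t90 - t10

tau = 10e-12                           # 10 ps time constant
tr = rise_time(tau)
print(f"tr = {tr * 1e12:.2f} ps, analytic = {tau * math.log(9) * 1e12:.2f} ps")
```

The measured value matches τ·ln(9) to within the sampling resolution; the same scan run backwards from 90% to 10% gives the fall time.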

Figure 11.20: Rise time, fall time and propagation delay

11.9 The latch

With the circuits that were discussed so far, the input signal has a direct influence on the output signal. As we saw in the previous section, this can cause errors when signal paths have different propagation delays. It is often advantageous to let the input directly influence the output only at certain times, after which the output signal is held constant for a while, independent of what the input signal does during that time. Examples of this are all logical ports that work with clock signals.

11.9.1 Signal retention

To "hold" a signal, it needs to be memorized for a short time. This can be done in many ways. The simplest way is charge storage on a capacitor: the voltage doesn't change as long as there is no current. This is the principle of DRAM (which does leak, so it needs to be refreshed), EPROM (no significant leakage), ...

Figure 11.21: Stable condition for a = 1 and b = 0

In digital logic, the circuit in figure 11.21b is typically used: a non-ideal amplifier with feedback and an amplification larger than 1. A given signal will appear, amplified, at its own input again. If the large-signal transfer function is something like

\[ v_{OUT} = f_{saturating}(v_{in}) \]

where

• the saturating function f_saturating gets "stuck" at 0 V or at V_DD, and
• f_saturating(V_DD/2) ≈ V_DD/2,

then a "low" signal will appear amplified at the output, and stay there. Something analogous holds for a "high" signal. The principle of figure 11.21b is further elaborated on in figure 11.22. Such a circuit is called a latch (flip-flop) and is also applied in SRAM for storing bits. When the signal a was high at time t0, the right NMOS transistor conducts and b becomes low. This means that the left PMOS transistor conducts and a stays high. However, if a was low at t0, the right PMOS transistor conducts and b becomes high. When b is high, the left NMOS transistor conducts and a stays low. Note that the circuit of figure 11.22c consists, in fact, of two cross-coupled inverters.

Figure 11.22: Latch implementations with and without switches

Applications of the latch are: elimination of signal path length problems by synchronization, and use in volatile static memory (as long as a supply voltage is present, the information stays in memory). To store signals at a certain time, switches are included.

Bibliography

[1] E.L. Norton, "Design of Finite Networks for Uniform Frequency Characteristic", Technical Report TM2601860, Bell Laboratories, November 1926

[2] L. Thévenin, "Extension de la loi d'Ohm aux circuits électromoteurs complexes", Annales Télégraphiques (Troisième série), vol. 10, pp. 222-224, 1883

[3] H. Helmholtz, "Über einige Gesetze der Vertheilung elektrischer Ströme in körperlichen Leitern mit Anwendung auf die thierisch-elektrischen Versuche", Annalen der Physik und Chemie, vol. 89, no. 6, pp. 211-233, 1853

[4] L. Euler, “Introductio in Analysin Infinitorum”, 1748

[5] J. Fourier, “Th´eorie analytique de la chaleur”, Firmin Didot P`ere et Fils, Paris, 1822

[6] J.W. Nilsson and S.A. Riedel, "Electric Circuits", Prentice-Hall: 2000. Although this book presents and demonstrates a lot using unrealistic component values (1 Ω, 1 A, 1 V, 1 H, 1 F), it is a good introduction into many aspects of electronics and basic analysis methods.

[7] R.F. Pierret, "Modular Series on Solid State Devices", parts 1-4, Reading, MA: Addison-Wesley Publishing Company, 1990, ISBN 0201122987, ISBN 0201122952, ISBN 0201122960, ISBN 0201122979. A nice series on semiconductor physics, diodes, bipolar transistors and MOS transistors.

[8] S.M. Sze and K.K. Ng, "Physics of Semiconductor Devices", New York: Wiley, 2007, ISBN 978-0-471-14323-9. A well-known and good book on semiconductor physics, diodes, bipolar transistors, MOS transistors and more.

[9] W.F. Brinkman, D.E. Haggan, and W.W. Troutman, "A History of the Invention of the Transistor and Where It Will Lead Us", IEEE J. Solid-State Circuits, vol. 32, December 1997, pp. 1858-1864

[10] Y. Tsividis, “Operation and Modeling of The MOS Transistor”, Boston: McGraw-Hill, 1999, ISBN 0070655235 This is one of the books on MOS transistors.


[11] J.E. Lillienfield, "Method and Apparatus for Controlling Electric Currents", US patent 1745175, 1930. Lillienfield's patent on the invention of the MOS transistor; see also e.g. http://nl.espacenet.com.

[12] International Technology Roadmap for Semiconductors, available http://www.itrs.net

[13] S. Hong, "A History of the Regeneration Circuit: From Invention to Patent Litigation", available: http://www.ieee.org/portal/cms docs iportals/- iportals/aboutus/history center/conferences/che2004/Hong.pdf On the patent dispute between Lee de Forest and Edwin Armstrong.

[14] T. Lee, “The Design of CMOS Radio-Frequency Integrated Circuits”, Cam- bridge: Cambridge University Press, 1998, ISBN 052163922 A good and nice to read book on RF-electronics.

[15] S. Adams, “The Joy of Work: Dilbert’s Guide to Finding Happiness at the Expense of Your Co-workers”, New York: Harper Business, 1998, ISBN 0887308716

[16] J.M. Miller, “Dependence of the input impedance of a three-electrode vacuum tube upon the load in the plate circuit”, Scientific Papers of the Bureau of Stan- dards, vol. 15, no. 351, pp. 367-385, 1920

[17] R.P. Sallen and E.L. Key. “A Practical Method of Designing RC Active Filters”, IRE Transactions on Circuit Theory, Vol. CT-2, pp. 74-85, 1955

[18] Balanis, “Antenna Theory: Analysis and Design”, John Wiley: 1997 Presents about everything you would like to know about antennae.

[19] available http://www.amanogawa.com Applet to play with waves and antennas.

[20] S.C. Cripps, ”RF Power Amplifiers for Wireless Communications”, Artech House: 1999 A good book on power maximization in RF transmitter systems.

[21] R. Schmitt, "Understanding electromagnetic fields and antenna radiation takes (almost) no math", EDN, March 2000, pp. 77-88
