Module 3: Electronics
2012-2013 Anne-Johan Annema
Translation: Yoeri Bruinsma Lian Xi
UNIVERSITEIT TWENTE

Contents
0 Introduction
  0.1 Electronics
  0.2 Electronic systems
  0.3 A general electronic system
  0.4 Structure of the book
  0.5 Preparatory knowledge for this book
    0.5.1 Notation
    0.5.2 Linear components
    0.5.3 Independent sources
    0.5.4 Controlled or independent sources
    0.5.5 Kirchhoff’s current and voltage laws
    0.5.6 Superposition
    0.5.7 Thévenin and Norton equivalents
    0.5.8 Linear networks and signals
    0.5.9 Fourier transformations
    0.5.10 Differential equations
    0.5.11 Circuit analysis methods
    0.5.12 Transfer functions
    0.5.13 Bode plots
    0.5.14 Calculations & mathematics
    0.5.15 Simplifying relations
  0.6 Solving exercises
    0.6.1 Verification using the answer manual
  0.7 And finally
1 Models
  1.1 Components
  1.2 Analysing and modelling circuits
  1.3 Ideal diode model
  1.4 Diode models and time-independent circuits
  1.5 Diode models and time-dependent circuits
2 Summary of semiconductor physics
  2.1 Introduction
  2.2 Semiconductors
  2.3 Diodes
  2.4 Bipolar junction transistors (BJTs)
  2.5 MOS-transistors
    2.5.1 MOS-transistor in strong inversion
    2.5.2 MOS-transistor in strong inversion: summary
    2.5.3 MOS-transistor symbols
3 Bias circuits
  3.1 Introduction
    3.1.1 Biasing a transistor: the bias point
    3.1.2 Biasing a transistor: requirements for its bias point
    3.1.3 Biasing a transistor
  3.2 Biasing a BJT
  3.3 Biasing a MOS-transistor
4 Small-signal equivalent circuits
  4.1 Introduction
  4.2 Linear model for transistors
    4.2.1 SSEC of a BJT
    4.2.2 SSEC of a MOS transistor
    4.2.3 Small-signal parameters
  4.3 Amplifier circuits
    4.3.1 Coupling the input and output
    4.3.2 SSEC of a basic amplifier circuit
5 Amplifier circuits
  5.1 Introduction
    5.1.1 The common-base circuit, CBC
    5.1.2 The common-gate circuit, CGC
    5.1.3 The common-collector circuit, CCC
    5.1.4 The common-drain circuit, CDC
    5.1.5 CEC, CBC, CCC, CSC, CGC and CDC: a comparison
  5.2 Cascade of multiple amplifiers
    5.2.1 Voltage source
    5.2.2 Current source
    5.2.3 Current mirror
6 Feedback
  6.1 Introduction
  6.2 Negative feedback
    6.2.1 Full negative feedback: a first concept
    6.2.2 Partial negative feedback: a generalised concept
  6.3 Negative feedback and amplifiers: some examples
    6.3.1 Effect of negative feedback on bandwidth
    6.3.2 Effect of negative feedback on interference and noise
    6.3.3 Effect of negative feedback on nonlinear distortion
  6.4 Stability
    6.4.1 Rough classification of systems with feedback
    6.4.2 Stability of systems with negative feedback
    6.4.3 Stable and unstable: now what?
    6.4.4 Stability of systems with feedback: examples
    6.4.5 Phase and gain margin
    6.4.6 Positive feedback: peaking
    6.4.7 The Bode plot as tool for presentation
  6.5 Feedback and dominant first-order behavior
    6.5.1 Creating dominant first-order behavior
7 The op-amp and negative feedback
  7.1 Introduction
  7.2 Linear applications
    7.2.1 Non-inverting voltage amplifier
    7.2.2 Inverting voltage amplifier
    7.2.3 Virtual ground
    7.2.4 The integrator
    7.2.5 The differentiator
    7.2.6 Summation of currents
    7.2.7 Summation of voltages
    7.2.8 Subtraction of voltages
    7.2.9 Filters
  7.3 Feedback with non-linear elements
    7.3.1 Logarithmic conversion
    7.3.2 Exponential converters
  7.4 Op-amp non-idealities
    7.4.1 Frequency-dependent gain
    7.4.2 First-order behavior and slew rate
8 Positive feedback: oscillators
  8.1 Harmonic oscillators with a low Q
    8.1.1 General introduction
    8.1.2 Wien bridge oscillator
    8.1.3 Phase-shift oscillators
    8.1.4 Startup conditions
  8.2 Harmonic oscillators with higher Q
    8.2.1 Single transistor oscillators
    8.2.2 Crystal oscillators
9 Basic internal circuits for op amps
  9.1 Introduction
  9.2 The input stage
    9.2.1 Symmetry requirement
    9.2.2 First implementation: large signal behaviour
    9.2.3 Second (or actual) implementation: large signal behaviour
    9.2.4 Small signal behaviour
    9.2.5 Small signal behaviour with a non-ideal current source
  9.3 From input stage to intermediate stage
  9.4 Intermediate stages
  9.5 Output stages
    9.5.1 Requirements for the output stage
    9.5.2 Simple output stages
    9.5.3 Slightly less simple output stages
    9.5.4 Power efficiency aspects of output stages
  9.6 Frequency dependencies
    9.6.1 Bandwidth limitations: small signal
    9.6.2 Bandwidth limitations: large signal
10 Introduction to RF electronics
  10.1 Introduction
  10.2 Transmitting and receiving
  10.3 Maxwell
    10.3.1 Maxwell and Kirchhoff
  10.4 Introduction to antennae
  10.5 Dipole antennae
  10.6 Monopole antennae
  10.7 Other antenna characteristics
  10.8 A transmission system, a bit more exact
  10.9 In addition
    10.9.1 Impedance matching and maximum power transfer
    10.9.2 Fourier transformations, FFT and more
11 Digital Circuits
  11.1 Introduction
  11.2 Designing logical building blocks
    11.2.1 Basic logic ports
    11.2.2 The relation between “high” (digital) and “high” (analog)
  11.3 Old solutions: DL, DTL and TTL
    11.3.1 Diode logic
    11.3.2 Diode-transistor and transistor-transistor logic
  11.4 NMOS and PMOS logic
    11.4.1 From TTL to NMOS
    11.4.2 The analog linear amplifier
    11.4.3 A digital (saturated) amplifier
    11.4.4 Arbitrary functions with NMOS logic
    11.4.5 The PMOS alternative
  11.5 The current solution: CMOS implementation
  11.6 The loading of a port
    11.6.1 Comparing power consumption
  11.7 Choosing the supply voltage
    11.7.1 Scaling
    11.7.2 Transistor scaling and supply voltage
  11.8 Speed
    11.8.1 Definitions of parameters
  11.9 The latch
    11.9.1 Signal retention
Bibliography
Index

Chapter 0
Introduction
0.1 Electronics
In everyday life, we are surrounded by more and more electronic devices, containing more and more electronic circuitry. This growing amount of electronics in existing devices drastically increases their applicability and usability. Examples of existing devices that have gained more functions are:
• TVs with wireless Dolby Surround sound and digital video enhancement
• radios with multi-channel audio processors and RDS
• computers with increasing calculation speeds
• car-electronics for controlling the engine, suspension, brakes, air conditioning, ABS, ESP and more
• cellular phones becoming pocket computers
• wireless Local-Area Network
• electronic control of more devices
• voice-controlled devices
Furthermore, there is an increasing number of “new” applications due to the new possibilities provided by electronics. Very soon, electronic devices will have many more options than they have today; the new options will overshadow the current “new” options, making them look old and outdated¹. What all these new applications have in common is the fact that the computational power of electronics keeps increasing, while the electronics become cheaper at the same time. Most of the “new” applications of electronics either enlarge the possibilities of existing devices or increase their ease of use.
¹Examples of the past 25 years are, among others, wireless telephones (GSM, introduced in 1992), the PC (introduced in 1982 with a 4.77 MHz 8-bit CPU), the CD (introduced in 1983), electronic engine management, magnetic bank cards, ...
0.2 Electronic systems
In general, electronic systems process information “picked up” from the physical world using a sensor, which transforms the physical information to the electronic domain. The physical information can be almost anything:
• temperature (your room, a combustion engine, a CPU, ...)
• light (fiber-optic connections, optical data-readers, CD, DVD, motion sensors, spectroscopy, ...)
• pressure (a switch, sound, weight, ..)
• electromagnetic waves (radio, GSM, ...)
• magnetic fields (like in conventional hard drives)
After the transition to the electrical domain, the electronics can process the information. After processing, the final electronic signal is converted into something physical, usually by an actuator. The physical quantity can, again, be almost anything: temperature, fan speed, light, sound, radiation, magnetic fields on a hard drive and many others.
0.3 A general electronic system
A representation of an advanced electronic system is shown in figure 0.1. Clearly recognizable are the input and output signals of the system; these can be the signals from sensors and to actuators. Another “input signal” that is needed is the power supply voltage². Since the input and output signals come from and go to the (analog) physical world, they are analog by nature. The analog character of audio, video and radio signals is clear. There are, however, many analog signals that carry binary data. Examples are the signals of optical readers (for CD, DVD and optical LAN) and high-speed binary data connections, mostly found in PCs (USB2, USB3, FireWire, AGP and PCI busses). Hence, analog signal-processing circuits are needed for the transformation to binary signals. In an advanced electronic system, many control functions and signal-processing functions are performed in the digital domain. A number of functions are schematically represented in the centre of figure 0.1. Two general representations of electronic systems are shown in figure 0.2. The difference between the two systems is artificial: the top system is an analog electronic system, while the lower one is a mix of an analog and a digital electronic system. Both systems are electronic, and both have analog electrical input and output signals.
²Since the power supply voltage usually does not contain any information, it is not considered a “signal”.
[Block diagram: an antenna with RF amplifier, audio/video inputs and outputs through an ADC and DAC, a CPU with memory, a DSP, high-speed data I/O with an external clock, an actuator, and a power management block fed from an external supply.]
Figure 0.1: Block schematic representation of an advanced electronic system
[Block diagrams: a purely analog system (top) and an analog-digital-analog system (bottom), each with an analog electrical input signal and an analog electrical output signal.]
Figure 0.2: Block schematic representation of a general electronic system
Electronic systems can, in general, be subdivided in different electronic functions. These contain, among others:
• The amplification of input signals for further processing or to control an actuator (amplify).
For example, a radio or television receiver has an input signal, delivered by the cable, in the order of 10⁻⁵ W (10 μW), while driving the display can easily take tens of Watts. Similarly, the power a vacuum cleaner uses to move air is in the order of tens of Watts, while flipping its switch takes about 10⁻⁵ W (10 μW); the power that ends up warming the air is usually in the order of 1 kW.
• The analog manipulation of signals (filtering, mixing, ..)
In radio equipment, including mobile phones, the transmitted signals are modulated on high frequencies. When receiving such signals, for example, the amplified signal from the antenna is mixed down to a lower frequency and then filtered. These analog operations take much less power (in the order of 10 mW) than directly digitalising the antenna signal using a high-frequency AD-converter and mixing and filtering it digitally (with a power consumption in the order of 1 kW).
• The digital manipulation of signals (filtering, editing, ...)
Many manipulations of signals are currently done in the digital domain. It can be proven that for signals which require a reasonable amount of accuracy, it can be more efficient (with respect to power usage) to perform the manipulations digitally. Furthermore, digital manipulation allows relatively easy signal processing that cannot easily be done in the analog domain. Digital processing can be done flexibly but rather inefficiently (in terms of power usage) using a general CPU and custom software, or inflexibly but efficiently on very specific data processors.
• Transformation of signals (AD and DA conversion, information extraction)
Extensive signal editing is usually done digitally. To do so, the analog signals have to be transformed to the digital domain. Furthermore, something useful has to be done with the processed signal, which means that it has to be transformed back to some analog quantity, using DA conversion. Detecting binary input signals, from for instance CDs or cable connections, likewise requires the familiar analog-to-digital conversion.
• The storage of signals (memory)
The blocks that perform an analog or digital “manipulation” can contain a number of functions. For most analog and digital functions (a function in this context is an abstraction of ‘that which the system does physically’), the output signal has a higher power than the input signal. In addition, performing the manipulations themselves also takes energy. Obviously, the extra power at the output and the energy used to perform the manipulations have to come from somewhere: in order to keep the process going, the circuit has to be “fed”. The circuit of figure 0.2 therefore has to be replaced by that of figure 0.3.
[Block diagrams: the two systems of figure 0.2, now drawn with an explicit power supply connection.]
Figure 0.3: Block schematic representation of an electronic system with supply voltage
The power supply is usually a DC voltage. In many devices, this DC voltage is provided by batteries. When larger amounts of power are needed, we usually resort to wall-socket AC power; this alternating voltage then first has to be converted into a clean DC voltage.
0.4 Structure of the book
Figure 0.4 shows the general structure of this book.
[Flowchart: models and modelling, dealing with non-linear components (chapter 1); semiconductor components: diodes, bipolar transistors, MOS transistors (chapter 2); intro basic amplifier circuits: operation and biasing (chapter 3); small signal equivalent circuits (chapter 4); amplifying circuits (chapter 5); feedback (chapter 6); amplifiers and negative feedback (chapter 7); harmonic oscillators (chapter 8); basic internal circuits for opamps (chapter 9); introduction to RF-electronics and transmitters (chapter 10); digital (chapter 11).]
Figure 0.4: Structure of this book
Chapter 1 covers models and modelling: simplifications of reality that are used to describe more complex, non-linear problems in an easy manner. Chapter 2 presents a short recap of semiconductor physics: the very basics of the electronic components used in this book: the diode, bipolar transistors and MOS transistors. These semiconducting devices are intrinsically non-linear. Although non-linearity increases calculation difficulty, you need non-linear components in any sensible electronic circuit or system: (power) amplification fundamentally requires non-linear components. Basic amplifying circuits using bipolar junction transistors (BJTs) and MOS transistors are introduced in chapter 3. Chapters 4 and 5 deal with modelling of these basic amplifying circuits: (large-signal) bias settings, small-signal equivalent circuits and more complex circuits are covered. Feedback around amplifying circuits is a very powerful tool that can be used to improve specific characteristics or to suppress unwanted effects. Feedback is covered extensively in chapter 6, including issues such as stability and the improvement of desired characteristics. Elaborations and implementations of feedback are presented in chapter 7 for stable systems, and in chapter 8 for oscillating systems.
Finally, specific RF-issues such as transmission, antennas, reflections and other interesting matters are covered in chapter 10. For readability reasons:
Important conclusions are framed
Background information, or extra information, is usually printed in a somewhat smaller font on a gray background.
Examples are also shown on a gray background, but with normal sized font.
An exception to this rule is the content in the remainder of this chapter, dealing with preparatory knowledge. This could have been typeset as background information, but since it is considered essential, we have chosen not to do so.
0.5 Preparatory knowledge for this book
It is assumed that the reader has some basic knowledge of electronic circuit analysis and of mathematics. Below, a short recap is presented.
0.5.1 Notation

This book uses a consistent notation for components and signals; see the table below.

Notation | What it is                                        | Expression
R        | a resistor                                        | u_R/i_R
C        | a capacitor                                       | Q_C/u_C
L        | an inductor                                       | Φ_L/i_L
Z        | an impedance                                      | can be anything
r        | a resistance                                      | ∂u_R/∂i_R
c        | a capacitance                                     | ∂Q_C/∂u_C
l        | an inductance                                     | ∂Φ_L/∂i_L
v_X      | the total voltage at node X                       |
V_X      | the DC voltage at node X                          |
v_x      | the voltage variation at node X                   | v_X − V_X
V_x      | the amplitude of the voltage variation at node X  | v̂_x
f        | signal frequency, in [Hz]                         |
ω        | angular signal frequency, in [rad/s]              |
0.5.2 Linear components

Simple electronic networks are built up from linear components; their element equations and impedances are listed below.

Component | Value     | u–i relation    | Impedance       | Unit
resistor  | R = u/i   | i = u/R         | Z_R = R         | Ω (Ohm)
capacitor | C = Q/u   | i = C · ∂u/∂t   | Z_C = 1/(jωC)   | F (Farad)
inductor  | L = Φ/i   | u = L · ∂i/∂t   | Z_L = jωL       | H (Henry)

The symbols for the components above, as used in this book, are presented in figure 0.5a to c. Often a general impedance is used, rather than an impedance specifically for a capacitor, inductor or resistor. In that case, the symbol for a resistor is used with a notation that makes clear it is an impedance: Z_C, Z_R, Z_L or Z_x.
Figure 0.5: Linear components: a) a resistor or impedance, b) a capacitor, c) an inductor, d) a DC-voltage source, e) a voltage source and f) a current source.
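The element impedances in the table above are easy to evaluate numerically with complex arithmetic. A minimal Python sketch (the component values and the frequency are arbitrary, chosen only for illustration):

```python
import cmath
import math

def impedances(R, C, L, f):
    """Impedances of the three linear components at frequency f [Hz]."""
    w = 2 * math.pi * f      # angular frequency [rad/s]
    ZR = complex(R, 0)       # resistor:  Z_R = R
    ZC = 1 / (1j * w * C)    # capacitor: Z_C = 1/(jwC)
    ZL = 1j * w * L          # inductor:  Z_L = jwL
    return ZR, ZC, ZL

ZR, ZC, ZL = impedances(R=1e3, C=1e-6, L=1e-3, f=1e3)
phase_C = math.degrees(cmath.phase(ZC))  # -90 degrees
phase_L = math.degrees(cmath.phase(ZL))  # +90 degrees
```

Note that Z_C and Z_L are purely imaginary: a reactive element shifts the phase by 90° but dissipates no average power.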
0.5.3 Independent sources

There are two basic types of independent sources: the independent voltage source and the independent current source. Usually, the term “independent” is dropped for simplicity. The voltage source forces a voltage difference across its terminals, independent of the current that flows due to that voltage; hence, a voltage source can either deliver or dissipate energy. In this book, we will encounter two different independent voltage sources: the DC-voltage source (shown in figure 0.5d) and the general voltage source (figure 0.5e). The current source, shown in figure 0.5f, forces, as the name suggests, a current through its terminals, no matter what. This book does not make any symbolic distinction between the various current sources: DC, AC, independent or controlled.
0.5.4 Controlled or independent sources

Circuits with amplifying components are usually modelled using controlled sources. We already know the symbols for the controlled voltage and current sources: they are those of figures 0.5e and f. The value of a source shows whether it is controlled or independent: a value I_A corresponds to a DC-current source, while a value like s·v_in corresponds to a controlled current source.
0.5.5 Kirchhoff’s current and voltage laws

Kirchhoff’s voltage law (KVL) and Kirchhoff’s current law (KCL), formulated in 1845 by Gustav Kirchhoff, give elementary relations for electronic circuits³. The laws state, in short, that the total voltage drop around any mesh equals 0 V, and that no current can appear at or disappear from a node:

Σ_mesh v_n = 0    and    Σ_node i_n = 0

In essence, the current and voltage laws are nothing more or less than the two most basic laws of (simple) physics: the laws of conservation of matter and conservation of energy. As a short explanation: if you apply the law of conservation of matter to the particles we call electrons, you obtain Kirchhoff’s current law: electrons do not disappear or appear at random, and hence the summed current into any node is zero. Furthermore, electrons have some level of energy, which is expressed in electronvolts [eV]. In electronics, we usually work with a large number of electrons (a Coulomb), which results in the unit Volt [V]. Since electrons do not (dis)appear at random, and energy does not either, the voltage drop around any mesh must equal 0 V.
³These laws are valid if there is no electromagnetic coupling into or out of the circuit. Taking electromagnetic effects into account was done later by Maxwell; this is nowadays very relevant for RF-electronics and EMC-problems.
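Both laws can be illustrated on the simplest possible circuit. A small Python sketch, assuming a hypothetical 5 V source driving two resistors in series (all values chosen only for illustration):

```python
# Hypothetical series circuit: a 5 V source driving R1 and R2 in series.
v_src, R1, R2 = 5.0, 1000.0, 2000.0
i = v_src / (R1 + R2)          # the single mesh current
v_R1, v_R2 = i * R1, i * R2    # voltage drops across the resistors

# KVL: the voltage drops around the single mesh sum to the source voltage.
kvl_residual = v_src - v_R1 - v_R2
# KCL: the same current enters and leaves the node between R1 and R2.
kcl_residual = i - i
```

Both residuals are zero, as the laws demand; in hand calculations they are a quick sanity check on any solved circuit.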
0.5.6 Superposition

In any circuit, the voltage at a node (or the current in a branch) results from the contributions of all sources in that circuit. However, calculating the voltage at some node due to all sources simultaneously can be a lot of work. In linear circuits, a voltage or current can be calculated with much less effort by calculating the contribution of every independent source separately and finally summing all these contributions. This method is called superposition; it is one of the most powerful tools available for linear circuit analysis. The underlying idea is that a complex problem is separated into small problems in a very efficient way⁴. A good example of a circuit that can be analyzed easily using superposition, but only with great difficulty without it, is the R-2R-ladder circuit shown in figure 0.6.
Figure 0.6: An R-2R-ladder circuit: an example where superposition is extremely useful.
The output voltage as a function of the four independent sources is easily obtained if we calculate the separate contributions of all the independent sources. For the given circuit, we have to do this four times, using the circuits presented in figure 0.7a-d. From this follows:

v_OUT(v1) = (1/16) · v1
v_OUT(v2) = (1/8) · v2
v_OUT(v3) = (1/4) · v3
v_OUT(v4) = 2R/(2R + 2R) · v4 = (1/2) · v4

v_OUT = v4/2 + v3/4 + v2/8 + v1/16

Verifying this can be very easy; the simplest way is to simplify the circuit stepwise. Example: the circuit of figure 0.7d is simplified by replacing the leftmost (2R//2R) by a single R, and then replacing (R+R) by a single 2R. This results in the circuit of figure 0.7g. The same can be done for the circuits of figures 0.7b and c, which results in figures 0.7e and f. This example clearly shows that a “divide and conquer” strategy yields many possible simplifications, ultimately reducing the amount of cumbersome calculations.
⁴This principle was already invented by Philip of Macedonia, around 350 BC, with the motto “divide et impera”, although it was not applied to electronic circuits at the time.
Figure 0.7: The R-2R-ladder circuit separated into four sub-circuits, a to d. For the last three, the simplified versions in e, f and g suffice.
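The superposition result v_OUT = v4/2 + v3/4 + v2/8 + v1/16 can be cross-checked with a full nodal analysis of the ladder. A sketch in plain Python (no libraries), assuming the ladder of figure 0.6 with its terminating 2R at the leftmost node, series resistors R between the four nodes, and the output taken open-circuit at the rightmost node:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def r2r_output(v1, v2, v3, v4, R=1.0):
    """Open-circuit output of the 4-bit R-2R ladder via nodal analysis (KCL)."""
    g, G = 1.0 / (2 * R), 1.0 / R   # conductances of the 2R and R branches
    A = [
        [2 * g + G, -G,        0.0,       0.0],   # node 1: 2R to v1, 2R to ground, R to node 2
        [-G,        g + 2 * G, -G,        0.0],   # node 2: 2R to v2, R left and right
        [0.0,       -G,        g + 2 * G, -G],    # node 3: 2R to v3, R left and right
        [0.0,       0.0,       -G,        g + G], # node 4: 2R to v4, R to node 3 (output node)
    ]
    b = [g * v1, g * v2, g * v3, g * v4]
    return solve(A, b)[3]

v1, v2, v3, v4 = 1.0, 2.0, 3.0, 4.0
full = r2r_output(v1, v2, v3, v4)
by_superposition = v4 / 2 + v3 / 4 + v2 / 8 + v1 / 16
```

Solving the circuit once with all four sources active gives, to rounding error, the same answer as the superposition formula; that agreement is exactly the point of the method.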
Advanced superposition

In most textbooks, superposition is formulated only for independent sources, and it may appear that it does not hold for dependent (controlled) sources or for circuits that contain dependent sources. This is wrong! In the analysis of circuits, you can calculate the contribution of any dependent source exactly the same way you would for an independent source. The trick is that at some stage (preferably at the end of the calculations) you have to fill in the controlling voltage or current of the controlled sources. Until then, it does not matter at all whether a source value is independent or dependent.
0.5.7 Thévenin and Norton equivalents

The electrical behavior of every linear circuit can be modelled as a single source and a single impedance. This is easily explained using the definition of a linear circuit: its electrical behavior is described by a linear function, which in turn is uniquely determined by just two points. For linear circuits, it is convenient to choose the two points where the load is Z = 0 Ω and Z → ∞. In words: calculate the open terminal voltage of a (sub)circuit, without any load, and calculate the current that would flow if you short-circuited its terminals, and the two convenient points are determined. From this, a simple model can be constructed with just one source and one impedance. If the equivalent uses a current source, we call it a Norton equivalent, while a model with a voltage source is called a Thévenin equivalent. Both are named after their discoverers: Thévenin published in 1883 [1] and Norton in 1926 [2]⁵.
Figure 0.8: A random linear circuit with its Thévenin and Norton equivalents
The circuit in figure 0.8a has its Thévenin and Norton equivalents shown in, respectively, figures 0.8b and c. The open-circuit voltage and short-circuit current for this example are:

v_open = −i · (Z4 // (Z3 + Z1//Z2)) + v · (Z2 // (Z3 + Z4)) / (Z1 + Z2 // (Z3 + Z4)) · Z4 / (Z3 + Z4)

i_shortcircuit = −i + v · (Z3 // Z2) / (Z1 + Z3 // Z2) · 1/Z3
According to Ohm’s law, the following holds for the equivalent circuits:

v_EQU = v_open
i_EQU = i_shortcircuit
Z_EQU = v_open / i_shortcircuit
⁵The equivalent with a voltage source is called the Thévenin equivalent, although Helmholtz published the same theory 30 years earlier. The work of Helmholtz, however, did not receive any recognition.
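The two expressions above can be checked numerically: the ratio v_open/i_shortcircuit must equal the impedance seen from the terminals with both sources switched off (v shorted, i opened). A sketch with hypothetical, purely resistive values for Z1..Z4 and the two sources:

```python
def par(a, b):
    """Parallel combination a // b."""
    return a * b / (a + b)

Z1, Z2, Z3, Z4 = 1e3, 2e3, 3e3, 4e3   # assumed impedances [Ohm]
v, i = 10.0, 1e-3                      # assumed source values [V], [A]

# Open-circuit voltage and short-circuit current, per the formulas above.
v_open = (-i * par(Z4, Z3 + par(Z1, Z2))
          + v * (par(Z2, Z3 + Z4) / (Z1 + par(Z2, Z3 + Z4))) * Z4 / (Z3 + Z4))
i_short = -i + v * (par(Z3, Z2) / (Z1 + par(Z3, Z2))) / Z3

Z_EQU = v_open / i_short
# Cross-check: impedance seen from the terminals with dead sources.
Z_dead = par(Z4, Z3 + par(Z1, Z2))
```

That Z_EQU equals Z_dead for any source values is a useful consistency check when deriving Thévenin equivalents by hand.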
0.5.8 Linear networks and signals

A linear network consists of linear components: resistors (with an instantaneous linear relation between voltage and current) and capacitors and inductors (with an integral or differential relation between v and i). The input source can be either a current source or a voltage source. One of the best characteristics of a linear circuit is the fact that the input signal emerges undistorted at the output. At first, this might seem strange: if we apply a square wave to a linear circuit, we generally do not get a square wave at the output. The reason is that the input signal can be viewed as a sum of signals that each remain undistorted, but that may get a different phase or amplitude; see 0.5.9 for a discussion of this topic. The types of signals for which the output signal is a shifted and scaled version of the input signal s are those that satisfy the following mathematical relation:
∂s(t)/∂t ∝ s(t + τ)
Signals that satisfy this are sin(ωt + φ) and e^((a+jb)·t); in other words, harmonic and exponential signals. Euler has shown [4] that these two types of signals are related⁶: e^(jb·t) is a rotating unit vector in the complex plane with angle bt. The representation of this vector on the real axis is cos(bt), while the imaginary part is j · sin(bt). From this, it follows that:
e^((a+jb)t) = e^(at) · (cos(bt) + j · sin(bt))
sin(ωt) = (e^(jωt) − e^(−jωt)) / (2j)
cos(ωt) = (e^(jωt) + e^(−jωt)) / 2

With this knowledge, it is also very easy to deduce the impedance of reactive elements. For example, for a capacitor it follows (based on a harmonic signal):

Z_C = v/i
i = C · ∂v/∂t = C · ∂(V_c · sin(ωt))/∂t = C · ω · V_c · cos(ωt) = C · ω · V_c · sin(ωt + 90°)
Z_C = sin(ωt) / (C · ω · sin(ωt + 90°)) = 1/(jωC)
⁶The proof of this is remarkably simple if we take the Taylor expansions of the exponential function, sine and cosine: e^x = 1 + x + x²/2! + x³/3! + ..., cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + ... and sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + .... If we remember that j⁰ = 1, j¹ = j, j² = −1 and j³ = −j, then it immediately follows that e^(jx) = cos(x) + j · sin(x).
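Euler's relation and the resulting capacitor impedance are easy to verify numerically; a short Python check (the frequency and capacitance are arbitrary, chosen only for illustration):

```python
import cmath
import math

# Euler: e^(jx) = cos(x) + j*sin(x), checked at a few sample points.
for x in (0.0, 0.5, 1.0, math.pi / 3):
    assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12

# Capacitor impedance Z_C = 1/(jwC): magnitude 1/(wC) and phase -90 degrees,
# i.e. the current through a capacitor leads the voltage across it by 90 degrees.
w, C = 2 * math.pi * 1e3, 1e-9
ZC = 1 / (1j * w * C)
magnitude = abs(ZC)
phase_deg = math.degrees(cmath.phase(ZC))
```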
0.5.9 Fourier transformations

The basic signals used to analyze linear circuits, the sinusoidal functions, are closely related to Fourier analysis. Fourier stated [5] that every periodic signal f(x) can be written as an infinite sum of harmonic signals:
f(x) = a₀ + a₁ · cos(x + φ₁) + a₂ · cos(2x + φ₂) + ...

Using a number of goniometric relations, the Fourier transformation of a signal is obtained. The relevant relations here are:

∫₀^2π sin(x) dx = 0    and    ∫₀^2π cos(x) dx = 0
a · cos(x) + b · sin(x) = √(a² + b²) · cos(x − atan(b/a))
sin(x) · sin(y) = ½ · cos(x − y) − ½ · cos(x + y)

The first two relations state that the average of a harmonic signal equals 0. The third relation states that the sum of a sine and a cosine with the same argument can be written as one harmonic function with that argument and a phase shift. The fourth relation is crucial: the product of two harmonics equals the sum of two harmonics, one at the difference of the arguments, the other at the sum of the arguments. From the first three relations, it immediately follows that if a periodic signal with angular frequency ω can be written as a sum of harmonics, then those harmonics must have angular frequencies that are integer multiples of the angular frequency of the original signal. Now, a new relation can be written:
f(ωt) = a₀ + a₁ · cos(ωt + φ₁) + a₂ · cos(2ωt + φ₂) + ...

Notice that the a₀-term corresponds to the 0th harmonic, or in fact the a₀ · cos(0) term. The above relation can already be used to perform Fourier transformations: all aₙ terms and all φₙ factors would have to be determined. However, in general, determining the φₙ factors can be very difficult. Using the 3rd goniometric relation, we can simplify this process. This gives us the most widespread Fourier formula:
f(ωt) = a₀ + a₁ · cos(ωt) + b₁ · sin(ωt) + a₂ · cos(2ωt) + b₂ · sin(2ωt) + ...

From the fourth goniometric relation, together with the first two, the relations to determine aₙ and bₙ can be derived quite easily:

∫₀^2π sin(x) · sin(x) dx = ∫₀^2π cos(x) · cos(x) dx = ½ · 2π = π  →

aₙ = (1/π) ∫₀^2π f(x) · cos(nx) dx = (2/T) ∫₀^T f(ωt) · cos(nωt) dt
bₙ = (1/π) ∫₀^2π f(x) · sin(nx) dx = (2/T) ∫₀^T f(ωt) · sin(nωt) dt,    with T = 2π/ω = 1/f

The Laplace transformation is closely related to the Fourier transformation; one of the most important “differences” is the use of e^(jx) instead of sin(x) and cos(x). In this book, the Laplace and Fourier transformations are not explicitly used; the most important thing is to realize that every periodic signal consists of harmonic components.
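The aₙ and bₙ formulas can be tried out numerically on a concrete signal. A sketch that approximates the integrals with a midpoint sum, using a square wave (+1 on the first half period, −1 on the second) whose well-known series contains only odd sine terms, bₙ = 4/(nπ):

```python
import math

def fourier_coeffs(f, n, samples=100_000):
    """Approximate a_n, b_n = (1/pi) * integral of f(x)*cos(nx), f(x)*sin(nx)."""
    dx = 2 * math.pi / samples
    a = b = 0.0
    for k in range(samples):
        x = (k + 0.5) * dx              # midpoint rule over [0, 2*pi)
        a += f(x) * math.cos(n * x) * dx
        b += f(x) * math.sin(n * x) * dx
    return a / math.pi, b / math.pi

square = lambda x: 1.0 if x % (2 * math.pi) < math.pi else -1.0
a1, b1 = fourier_coeffs(square, 1)
a2, b2 = fourier_coeffs(square, 2)
a3, b3 = fourier_coeffs(square, 3)
# b1 is close to 4/pi and b3 to 4/(3*pi); the even harmonics and all a_n vanish.
```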
0.5.10 Differential equations Something different which is closely related to the basic signals in linear systems are differential equations and their solutions. Usually, it is very convenient to analyze circuits in the frequency domain using complex impedances. In order to do so, the circuit must be linear. Electronic circuits usually satisfy this condition, or are modelled as such in order to be able to use complex impedances and frequency domain analyses. However, not all circuits can be linearized. In those cases, it might not be allowed to use complex impedances and the original element equations must be used which can only be analyzed in the time domain. This usually gives a differential equation. Below is a short summary for 1st and 2nd order differential equations: dx B · + C · x = D dt dx2 dx A · + B · + C · x = D dt2 dt It is evident that the resulting signal x(t) has a derivative which has the same shape as the signal itself, i.e. either exponential or harmonic. The exponential form is the most general one, and thus is used most of the time. The easiest solving method7 is to substitute the most general form and solve the missing parameters for a homogenic solution:
x(t) = X·e^{a·t}

aB·X·e^{a·t} + C·X·e^{a·t} = 0  →  a = −C/B

a²A·X·e^{a·t} + aB·X·e^{a·t} + C·X·e^{a·t} = 0  →  a = (−B ± √(B² − 4AC)) / (2A)

As you can see, there is just one solution for first-order differential equations, and two for second-order differential equations. (And yes, three for a third-order differential equation.) These two solutions can be complex, in which case an (exponentially increasing or decreasing) harmonic solution results:

X·e^{(a+jb)t} + X·e^{(a−jb)t} = X·e^{at}·(e^{jbt} + e^{−jbt}) = 2X·e^{at}·cos(bt)
When there are only real solutions, the output is the sum of two exponential functions. The particular solution, in which D is also taken into account, has to be solved next. This usually takes some tricks8. From the initial conditions, the remaining missing parameters can be determined.
7 A different, simple, solution method for first-order differential equations is separation of variables and integration.
8 Tricks or knowledge. If D is a constant, the particular solution x_particular = constant can be tried. The same goes for D = sin(ωt), where x_particular = A·sin(ωt) + B·cos(ωt) can be tried.
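The characteristic roots of the second-order equation can be computed in a couple of lines. This is a sketch with hypothetical example coefficients (A = 1, B = 2, C = 26; the function name is mine), using cmath so that complex root pairs appear automatically:

```python
import cmath

def char_roots(A, B, C):
    """Roots of A*a^2 + B*a + C = 0, i.e. the exponents a in x(t) = X*exp(a*t)
    that solve A*x'' + B*x' + C*x = 0."""
    disc = cmath.sqrt(B * B - 4 * A * C)
    return (-B + disc) / (2 * A), (-B - disc) / (2 * A)

# A=1, B=2, C=26 gives the complex pair a = -1 +/- 5j: an exponentially
# decaying harmonic solution, 2*X*exp(-t)*cos(5*t).
a1, a2 = char_roots(1.0, 2.0, 26.0)
```

A negative real part means a decaying response; a nonzero imaginary part means oscillation.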
0.5.11 Circuit analysis methods

A number of circuit analysis methods for (linear) electronic circuits are well known. The most common methods are nodal analysis and mesh analysis; in this book, we will mostly be using the brute-force approach. All these methods are very systematic, and while the first two are very well suited for implementation in software, the third method gives more insight (although it is difficult to automate in software).
• In nodal analysis, you calculate the total current into every node. According to Kirchhoff's current law, this current must be zero for every node. For easy calculation of these currents, passive components are described by admittances instead of resistances (impedances), and only current sources are used; every voltage source is replaced by its Norton equivalent. If this method is performed properly, a network with N nodes (in addition to the reference node) gives a set of N independent equations, which can be solved to give all node voltages. The circuit in figure 0.9 then gives:
[Figure: current source J1 drives node n1; conductances g1, g4 and g6 connect nodes n1, n2 and n3 to ground, while g2 connects n1–n2, g5 connects n2–n3 and g3 connects n1–n3.]

Figure 0.9: Example network for nodal analysis.
v1·(g1 + g2 + g3) − v2·g2 − v3·g3 = J1
−v1·g2 + v2·(g2 + g4 + g5) − v3·g5 = 0
−v1·g3 − v2·g5 + v3·(g3 + g5 + g6) = 0
Solving this set of equations can be done by hand quite straightforwardly, for example using Gaussian elimination. This set of equations can also be solved easily in software. For that, the set of equations is usually written in matrix form:

⎡ g1+g2+g3   −g2        −g3       ⎤ ⎡ v1 ⎤   ⎡ J1 ⎤
⎢ −g2        g2+g4+g5   −g5       ⎥ ⎢ v2 ⎥ = ⎢ 0  ⎥
⎣ −g3        −g5        g3+g5+g6  ⎦ ⎣ v3 ⎦   ⎣ 0  ⎦

Solving this equation numerically can be done easily using matrix inversion. Matrix inversion in software is usually implemented via LU-decomposition, Gaussian elimination and backward substitution. You can also do it by hand, which is the boring, non-insightful method you were taught to do. Sorry to inform you about that.
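As an illustration of Gaussian elimination with backward substitution, the sketch below solves the nodal equations of figure 0.9 numerically. The solver and the example values (all conductances 1 S, J1 = 1 A) are my own choices, not from the book:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    followed by backward substitution."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]            # partial pivoting
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]           # eliminate below the pivot
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # backward substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Figure 0.9 with hypothetical values g1..g6 = 1 S, J1 = 1 A:
g1 = g2 = g3 = g4 = g5 = g6 = 1.0
A = [[g1 + g2 + g3, -g2,           -g3],
     [-g2,          g2 + g4 + g5,  -g5],
     [-g3,          -g5,           g3 + g5 + g6]]
v = solve_linear(A, [1.0, 0.0, 0.0])
```

With these values the result is v1 = 0.5 V and v2 = v3 = 0.25 V, which you can confirm by symmetry.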
• Mesh analysis calculates the total voltage around each mesh. From Kirchhoff's voltage law, we know that this summed voltage must equal 0 V. Just as with nodal analysis, a set of equations is formulated, which then has to be solved. Since mesh analysis uses voltages, every current source must be replaced by its Thévenin equivalent. The circuit in figure 0.10 then gives:
[Figure: voltage source E1 with series impedance Z1 drives node n1; Z2 connects n1–n2, Z5 connects n2–n3, Z3 bridges n1–n3, and Z4 and Z6 connect n2 and n3 to ground, forming meshes m1, m2 and m3.]

Figure 0.10: Example network for mesh analysis.
i1·(Z1 + Z2 + Z4) − i2·Z2 − i3·Z4 = E1
−i1·Z2 + i2·(Z2 + Z3 + Z5) − i3·Z5 = 0
−i1·Z4 − i2·Z5 + i3·(Z4 + Z5 + Z6) = 0
or equivalently

⎡ Z1+Z2+Z4   −Z2        −Z4       ⎤ ⎡ i1 ⎤   ⎡ E1 ⎤
⎢ −Z2        Z2+Z3+Z5   −Z5       ⎥ ⎢ i2 ⎥ = ⎢ 0  ⎥
⎣ −Z4        −Z5        Z4+Z5+Z6  ⎦ ⎣ i3 ⎦   ⎣ 0  ⎦

This can again be solved with Gaussian elimination and backward substitution: fancy terms for simply working in a systematic manner to solve a set of linear equations. Just as with nodal analysis you can do it by hand, but a computer is much better at it. To make it worse, you probably do not get a lot of insight from doing these matrix inversions by hand.

• The brute-force approach subdivides the problem in a systematic manner, until every small sub-problem is not a problem anymore. Substituting everything back gives the desired answer. The method uses (for electronic circuits) Kirchhoff's voltage law, Kirchhoff's current law, and the element equations, whichever comes in handy at that instant. In a circuit with Nm voltage meshes, Nk nodes and Nc electronic components, obtaining the answer takes a maximum of (Nm + Nk + Nc) derivation steps, and another (Nm + Nk + Nc) substitution steps. The first equation that should be written down equals the desired answer. If a voltage transfer H is requested, then the first statement is a small elaboration (or specification) of the question itself:

H = vOUT / vIN

Next, every unknown on the right-hand side of the relation must be solved using KVL, KCL or an element equation. Here, multiple approaches are available
depending on the choices made. It is important to recognize that every variable for which an expression has already been derived (and hence appears on the left side of the "="-symbol) is known. For the transfer of the voltage from E1 to vn2 in figure 0.10, we get, for example:

H = vn2 / vE1
vn2 = iZ4 · Z4                  (EE)
iZ4 = iZ2 + iZ5                 (KCL)
iZ2 = (vn1 − vn2) / Z2          (EE)
iZ5 = (vn3 − vn2) / Z5          (EE)
vn1 = vE1 − vZ1                 (KVL)
vn3 = iZ6 · Z6                  (EE)
vZ1 = Z1 · (iZ2 + iZ3)          (EE)
iZ6 = −iZ5 + iZ3                (KCL)
iZ3 = (vn1 − vn3) / Z3          (EE)
Substituting these equations from the bottom up gives the desired relation. It seems like a lot of cumbersome work, but other methods need just as much (or even more) effort. Below, a portion of the substitution is presented. While calculating vn3, we get an expression which is a function of vn3 itself. This means that there are loops in the circuit: feedback paths from the output of your circuit to the input, for example. The only correct way to continue is to separate the variables, as shown below9.
iZ6 = −iZ5 + (vn1 − vn3)/Z3
vZ1 = Z1 · (iZ2 + (vn1 − vn3)/Z3)
vn3 = −iZ5·Z6 + ((vn1 − vn3)/Z3)·Z6          ⇐⇒ (separate variables)
vn3·(1 + Z6/Z3) = −iZ5·Z6 + (vn1/Z3)·Z6      ⇐⇒
vn3 = −iZ5·(Z6·Z3)/(Z3 + Z6) + vn1·Z6/(Z3 + Z6)
As a next step, the other variables must be calculated, requiring some rewriting. Smaller circuits, or circuits without loops (here, via Z3), are much less work. The brute-force approach will be used for small circuits within this course. Larger and more complex circuits will be divided into subsystems and calculated one at a time (or will not be analyzed at all).

9 The other method is recursive, with the associated problem that that method never ends.
vn1 = ...
iz2 = ...
iz5 = ...
iz4 = ...
vn2 = ...
H = ...
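The separated vn3 expression above is easy to sanity-check numerically: plug the explicit result back into the implicit relation and verify that the residual vanishes. All component and signal values below are hypothetical:

```python
# Hypothetical values for Z3, Z6, i_Z5 and v_n1 (impedances taken real for simplicity).
Z3, Z6 = 2.0, 3.0
i_z5, v_n1 = 0.1, 5.0

# Separated (explicit) result for v_n3:
v_n3 = -i_z5 * (Z6 * Z3) / (Z3 + Z6) + v_n1 * Z6 / (Z3 + Z6)

# It must satisfy the original implicit relation
# v_n3 = -i_z5*Z6 + ((v_n1 - v_n3)/Z3)*Z6:
residual = v_n3 - (-i_z5 * Z6 + (v_n1 - v_n3) / Z3 * Z6)
```

A residual of (numerically) zero confirms that the separation of variables was carried out correctly.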
The nice thing about this brute-force approach is that you are working toward one specific answer, while your method is based on divide and conquer. With the brute-force approach, a complex problem (e.g. calculating some transfer function or impedance) is divided into many very simple problems (element equations, KCL, KVL) which are combined to get the complete answer. Another positive aspect of the brute-force approach is that, as you gain more experience with this type of approach, the gained knowledge allows for quicker analysis.
0.5.12 Transfer functions In electronics, we often search for a relation between the input signal and something which is a consequence of that signal. Usually this consequence is an output signal, meaning that we often have to find a transfer function. Other common relations are (among others) the input and output impedance of an electronic circuit:
H(jω) = signal_out / signal_in
Zin = vin / iin
Zout = vout / iout

To analyze, sketch or interpret these transfer functions or impedances, it is usually convenient to rewrite the original function as (a product or sum of) standard forms. There are several standard forms; for a low-pass-like transfer function, we have:
H(jω) = H(ω0) · 1/(jω/ω0)

H(jω) = H(0) · 1/(1 + jω/ω0) = H(0) · 1/(1 + jω·τ0)

H(jω) = H(0) · 1/(1 + j·ω/(ω0·Q) + j²·ω²/ω0²)
The first form corresponds to an integrator, which is just a limit case of the second form. The second and third forms are identical and have a first-order characteristic; the fourth form has a second-order characteristic. High-pass characteristics can be obtained from low-pass functions using the substitution:

jω/ω0 |LP  →  ω0/jω |HP

From this it follows that:

H(jω) = H(ω0) · jω/ω0

H(jω) = H(∞) · (jω/ω0)/(1 + jω/ω0) = H(∞) · jωτ0/(1 + jωτ0)

H(jω) = H(∞) · (j²·ω²/ω0²)/(1 + j·ω/(ω0·Q) + j²·ω²/ω0²)

The order of any transfer function is simply equal to the highest power of ω. Every normal transfer function, of arbitrary order, can be written as the product of first- and second-order functions. Knowing the three basic standard forms for low-pass characteristics by heart, and being able to do some basic manipulations, pretty much covers everything you will ever need to visualize transfer functions or impedances as a function of frequency.
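Both low-pass standard forms are easy to sanity-check numerically. The sketch below (function names and the ω0, Q values are my own choices) evaluates them at ω = ω0, where the first-order form gives H(0)/√2 with a phase of −45° and the second-order form gives H(0)·Q with a phase of −90°:

```python
import cmath
import math

def H_lp1(w, w0, H0=1.0):
    """First-order low-pass standard form: H0 / (1 + j*w/w0)."""
    return H0 / (1 + 1j * w / w0)

def H_lp2(w, w0, Q, H0=1.0):
    """Second-order low-pass standard form: H0 / (1 + (j*w/w0)/Q + (j*w/w0)**2)."""
    s = 1j * w / w0
    return H0 / (1 + s / Q + s * s)

w0 = 1.0e4                         # hypothetical corner frequency, rad/s
H1 = H_lp1(w0, w0)                 # first order at the corner frequency
H2 = H_lp2(w0, w0, Q=2.0)         # second order at the corner frequency
mag1, ph1 = abs(H1), math.degrees(cmath.phase(H1))
mag2, ph2 = abs(H2), math.degrees(cmath.phase(H2))
```

At ω = ω0 the real terms of the second-order denominator cancel, leaving j/Q, which is where the |H| = H(0)·Q resonance peak comes from.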
0.5.13 Bode plots
A Bode plot is a convenient method for presenting the behaviour of a (linear) circuit; this is done by plotting the magnitude and phase shift of a transfer function as a function of the frequency. Here, the magnitude and frequency are plotted on a logarithmic scale, which proves to be very convenient10. Before we dive into Bode diagrams, we first repeat a number of mathematical rules for logarithms:
log(x) + log(y) = log(x·y)
log(x^y) = y·log(x)
log(x + y) ≈ log(x)    for x >> y

In words:

• the product of two values, on a logarithmic scale, equals the sum of those two values.
• a relation x = y^z on a log-log scale gives a straight line with a slope of z, where z can be any real number.
• the sum of two values, on a logarithmic scale, approximately equals the larger of the two. Only if these numbers are more or less the same size does this rule not apply.

To calculate the argument of a (complex) transfer function, the known rules for working with complex numbers are used. For example, the standard form of a first-order low-pass characteristic, H(jω) = H(0) · 1/(1 + jω/ω0), gives:

• for ω << ω0, a transfer function almost equal to H(0), with a phase shift of 0°.
• for ω >> ω0, a transfer function almost equal to H(0)·ω0/ω, with a phase shift of −90°.
• for ω = ω0, a transfer function equal to H(0)/√2, with a phase shift of −45°.

The modulus of this transfer function, as a function of frequency, can be approximated by a constant at low frequencies and a straight line with a slope of −1 at high frequencies, when both magnitude and frequency are plotted on logarithmic axes. The phase at low frequencies equals 0°, is −45° at ω = ω0 and approaches −90° for high frequencies. When plotted on a linear phase axis against a logarithmic frequency axis, this results in an S-shaped curve.

The thick curves in figure 0.11b give the modulus of the first-order transfer function (as a function of frequency) in a log-log plot. The asymptotic approximation, as described above, is given by the two dashed lines. The phase characteristic as a function of the frequency is given in figure 0.11c; obviously, this is presented on a lin-log scale. For comparison, figure 0.11a gives the modulus as a function of frequency on a lin-log scale, which clearly shows that for small moduli many details are lost.

[Figure: panels a), b) and c) show |H(jω)| and arg(H(jω)) versus ω for a first-order transfer and for second-order transfers with Q = 0.4, Q = 1 and Q = 2.]

Figure 0.11: The Bode plots of a first-order and three second-order transfers: a) modulus of the transfer on a lin-log plot: wrong, and too little detail for small |H(jω)|; b) the correct modulus plot, on a log-log scale; c) the correct phase plot, on a lin-log scale.

Figure 0.11 also gives the modulus and phase characteristics (which together form the Bode diagram) for second-order transfers. From the standard form of a second-order low-pass transfer, H(jω) = H(0) · 1/(1 + j·ω/(ω0·Q) + j²·ω²/ω0²), it immediately follows that:

• for ω << ω0, the transfer is almost H(0), with a phase shift of 0°.
• for ω >> ω0, the transfer is almost H(0)·ω0²/ω², with a phase shift of −180°.
• for ω = ω0, the transfer is H(0)·Q, with a phase shift of −90°.

The modulus characteristic is, on a log-log plot, easily drawn asymptotically, where some extra attention has to be paid to the modulus around ω = ω0. Other transfer functions can easily be constructed using the previously stated mathematical rules.

10 Obviously, there are numerous other methods; some of these will be covered later on in this book.

0.5.14 Calculations & mathematics

As every sane human knows, calculation (or mathematics, for the more complicated calculations) is a necessity for describing something in an exact manner. Without calculations, there would only be vague statements like "if I change something here, then something changes over there" or "if I press here, it hurts there". Those statements are completely useless!
As in any sensible scientific field, in electronics we like to get sufficiently exact relations that are described in an exact language: mathematical terms. In the past (no guarantee for the future!), it appeared that many students made errors in basic calculations, in mathematical manipulations and in basic calculation rules. To refresh some basic math knowledge, this section reviews some of the most basic math rules.

The basics

The basis of almost all math is the equation: a "=" with something on one side, and something else on the other side. What those somethings are is not of importance, but I know for a fact that the two somethings are equal to each other in some way. These days, in elementary school, students do math with apples, pears and pizzas:

(1/2) pizza + (1/2) pizza = pizza        (0.1)

Nonsense! Even if you would assume all pizzas to be of exactly the same size, shape and appearance (ingredients and their location), it would still depend on how you slice the pizza in half. It is possible to slice a pizza in half in, more or less, ∞ different directions, and if I cut the half pizzas in (0.1) in two different directions from full pizzas, then there is no way the two of them will form one complete pizza again, although that is suggested by (0.1)11.

In electronics, our job is much easier: we use (integer) numbers of electrons, (a real number of) electrons per second or (real) energy per electron: in other words charge, current and voltage. We might possibly add flux if we are talking about inductors, but then the physics gets a bit more complicated, since we would have to take relativity and Einstein into account. In general, we are dealing with quantities that can easily be added, subtracted, divided and multiplied. The basis for doing math with these quantities is simply the equation:

something = something        (0.2)

often written in a somewhat different form:

something_form1 = something_form2        (0.3)

It clearly states that the part left of the "="-symbol is equal to the part on the right.
More specifically: its magnitude is equal, not its form. Often, we would like to rewrite the equation to have something simple on the left (we "read" from left to right) which is understandable (monthly pay, speed, impedance, ...) and a form on the right with all other variables. This is what is called an equation or relation: if you change something on the right-hand side, something also changes on the left-hand side, and vice versa. Such mathematical relations give the relation between different parameters and are very valuable in analyses and syntheses12.

Basic rules

The most basic rules for relations are:

something = something
something · somethingelse = something · somethingelse
something + somethingelse = something + somethingelse
something = something · 1

These rules do not appear to be very difficult, but in fact they are. Specifically, the last rule appears to be very difficult, since what is this factor 1? A "1" can be written in numerous different ways: an infinite number of ways, in fact. From the first two rules it surely follows that:

1 = something1 / something1

something = something · something1 / something1

and choosing a convenient factor something1 takes some skill. It requires you to know what you want to know. But you should have already formulated that in §0.6, so it should not be a problem.

Basic math rules

In addition to the basic rules above, it is also assumed that the basic mathematical rules for exponential functions are known and can be applied by you. Also, the derivatives of some basic functions must be known by heart.

11 Yes, if you take the two halves from the same pizza it would still be incorrect, since you would have two half pizzas with a cut in them. If you think that is the same, think about two halves of a bicycle tire, two half legs or two half glasses: it does not ride smoothly, it does not walk very comfortably, and you can't drink beer from it.
If you remember how e^something and harmonic signals (sine and cosine) are related, then you have enough knowledge to start off in this book. If you have some skill in manipulating equations, can work in a structured way, and have some perseverance and some confidence in yourself, then you should be just fine!

12 Many people wrongly call "relations" "formulas"; please don't! A formula is a recipe where you put some numbers in and get another number back. There is a reason why formulas are used in fairytales and other falsehoods: it's because they like to keep things vague and unclear. A relation gives a (causal) connection between parameters, and hence gives more information and can be applied to a much wider scope than a formula.

0.5.15 Simplifying relations

In this book, relations will be derived frequently: mostly for impedances and transfer functions. Relations will be derived, since these relations help you to analyze, understand, optimize and synthesise things that are impossible with numerical methods. While deriving these relations, it comes in handy if you have some skill in simplifying equations. Simplifying equations comes down to just a small number of basic tricks:

• multiply by 1
• the equation "something = something"

The challenge is in choosing the correct 1. The transfer of a voltage divider consisting of a capacitor and a resistor could, for instance, be:

H(jω) = (1/(jωC)) / (1/(jωC) + R)

which is a pretty ugly expression, which can become more understandable with a multiplication by 1. If you choose the correct 1, that is...

H(jω) = (1/(jωC)) / (1/(jωC) + R) · (jωC)/(jωC) = 1/(1 + jωRC)          (well-chosen 1)

H(jω) = (1/(jωC)) / (1/(jωC) + R) · (1 − e^a)/(1 − e^a) = ((1 − e^a)/(jωC)) / ((1 − e^a)/(jωC) + R·(1 − e^a))          (not so well-chosen 1)

Often, in larger circuits and systems, a signal might be a function of itself. Simplifying those relations comes down to choosing a useful "something" in the equation "something = something".
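That the well-chosen 1 leaves the voltage-divider transfer above unchanged can be confirmed by evaluating both forms at one frequency; the component values here are hypothetical:

```python
# Raw and simplified forms of the RC voltage-divider transfer.
w, R, C = 1.0e3, 1.0e4, 1.0e-7        # hypothetical values: rad/s, ohm, farad
Zc = 1 / (1j * w * C)                 # capacitor impedance 1/(jwC)
H_raw = Zc / (Zc + R)                 # H = (1/jwC) / (1/jwC + R)
H_simple = 1 / (1 + 1j * w * R * C)   # after multiplying by (jwC)/(jwC)
```

Multiplying by 1 changes the form, never the value, so both evaluations must agree to within rounding.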
For instance, the relation

y = a·x + b·y

looks nothing like a closed expression if you want to know y. The solution is obviously "separation of variables", a trick which comes down to adding "something = something" to the equation13. A well-chosen "something = something" gives:

y − b·y = a·x + b·y − b·y          (something = −b·y)
y·(1 − b) = a·x

Simplifying even further can easily be done by multiplying with a well-chosen "something = something", like:

y·(1 − b)·1/(1 − b) = a·x·1/(1 − b)          (something = 1/(1 − b))
y = a·x / (1 − b)

Hence, in order to simplify a relation, it is of great importance to know the multiplication table of 1 by heart, and to be able to use the equation 1 = 1. This seems easy, but it usually proves to be very difficult.

13 If you add the relation something = something to something, you are of course actually adding nothing.

0.6 Solving exercises

Most problems can be tackled in the same general way. The method may seem so obviously logical that steps get skipped, which in turn leads to incorrect results and more work for you.

1. Understand the question, then try to specify it. For example, if you are asked for an output impedance, start by writing something like:

zout = ?

or

zout = ∂vOUT/∂iOUT = vout/iout = ?

This gives you a clear direction for the elaboration. You can also check afterwards whether you have actually calculated what you wanted to know.

2. Make a drawing/schematic where all items relevant to this specific problem are presented. Leave out everything that is not important. You might need multiple drawings/schematics to obtain a final version. Putting something together quickly usually yields incorrect results or causes unnecessarily complex calculations.

3. Work in a structured manner towards the answer. This can be done in several ways, some of which will be presented in this book.

4. Verify your answer:

• Check whether it is actually the answer to the question.
• Check whether the dimensions (units) agree. If the dimensions are correct, then it might be the correct answer. Example: a result like

zout = R0·(1 + id·Cin) / (1 + jω/ω0)

must be wrong: Ω ≠ (V/A)·(1 + A·(A·s/V)) / (1 + 1), since the term id·Cin is not dimensionless and can therefore never be added to 1.
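Such a dimension check can even be mechanized with a few lines of bookkeeping. This sketch is entirely illustrative (the helper names are mine, and the mismatched term id·Cin follows my reading of the example above): each unit is represented as a dict of SI base-unit exponents, so multiplying units means adding exponents.

```python
from collections import Counter

def u_mul(u, v):
    """Multiply two units given as {base_unit: exponent} dicts."""
    w = Counter(u)
    w.update(v)                                  # adds exponents
    return {k: e for k, e in w.items() if e != 0}

def u_div(u, v):
    """Divide unit u by unit v."""
    return u_mul(u, {k: -e for k, e in v.items()})

VOLT  = {"kg": 1, "m": 2, "s": -3, "A": -1}      # volt in SI base units
AMP   = {"A": 1}
SEC   = {"s": 1}
OHM   = u_div(VOLT, AMP)                         # ohm = V/A
FARAD = u_div(u_mul(AMP, SEC), VOLT)             # farad = A*s/V

# A term like id*Cin carries units A * (A*s/V); since this is not
# dimensionless (not the empty dict), it can never be added to a bare 1.
term = u_mul(AMP, FARAD)
```

An empty dict here would mean "dimensionless"; anything else flags a unit mismatch.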