THE PENNSYLVANIA STATE UNIVERSITY SCHREYER HONORS COLLEGE

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING, PENN STATE BEHREND

AMPLIFIER DESIGN USING NEURAL NETWORKS

DAPHNE CRUZ HIDALGO SPRING 2017

A thesis submitted in partial fulfillment of the requirements for a baccalaureate degree in Electrical Engineering with honors in Electrical Engineering

Reviewed and approved* by the following:

Thomas L. Hemminger Professor of Electrical and Computer Engineering Honors Adviser Thesis Supervisor

Mohammad Rasouli Assistant Professor of Electrical and Computer Engineering Faculty Reader

* Signatures are on file in the Schreyer Honors College.

ABSTRACT

This thesis presents an analysis of how neural networks can assist in analog circuit design.

The thesis focuses on the design of amplifiers using Bipolar Junction Transistors (BJTs). BJTs were invented in 1948 and began a revolution in electronics. There are three basic topologies for BJT amplifiers: common emitter, common collector, and common base. These are used as a voltage amplifier, voltage buffer, and current buffer, respectively. The circuits for these amplifiers are quite involved, with multiple components and parameters, and a neural network may well assist in the design of these circuits.

Neural networks provide a computational approach to solving many problems that are thought to need human intuition to solve. They can be trained to recognize patterns by taking in a set of inputs and arriving at an expected result. Neural networks are modeled after the biological neural network in the human brain. This thesis will explore the advantages of using a neural network to design one of the aforementioned amplifier circuits, namely the common collector, otherwise known as the emitter follower circuit.

In practice, the network should be able to be trained to accept the parameters of collector-emitter voltage (VCE), input resistance (RIN), and output resistance (ROUT), and in theory it would suggest different possible component combinations to the user, saving the time required to calculate the values by traditional methods. The network would be especially useful for professors who ask their students to solve BJT amplifier circuits, by supplying them with rapid solutions. It can be difficult for a professor to glance at resistor selections made by a student and decide whether the results are correct. However, it may be simpler to compare the student's resistor combinations to those suggested by the neural network to verify the solutions.

TABLE OF CONTENTS

LIST OF FIGURES

LIST OF TABLES

ACKNOWLEDGEMENTS

Chapter 1 Background Information

Neural Networks
Bipolar Junction Transistors

Chapter 2 Common Collector Amplifier

Determining RIN and ROUT Experimentally
Determining RIN and ROUT Analytically

Chapter 3 Neural Network to Determine Component Values

Chapter 4 Conclusions

Appendix A MATLAB Code for Training

Appendix B Neural Network Testing

BIBLIOGRAPHY


LIST OF FIGURES

Figure 1. Sigmoid activation function [4].

Figure 2. Neural network with one hidden layer [2].

Figure 3. A two-dimensional representation of the parity problem.

Figure 4. A three-dimensional representation of the parity problem [4].

Figure 5. BJT transistor: (a) PNP schematic symbol, (b) physical layout, (c) NPN symbol, (d) layout [7].

Figure 6. Common emitter amplifier [8].

Figure 7. Common collector amplifier [8].

Figure 8. Common base amplifier [8].

Figure 9. Common collector amplifier topology.

Figure 10. Common collector amplifier AC equivalent circuit.

Figure 11. Example common collector circuit.

Figure 12. Gain of the transistor circuit.

Figure 13. Voltage divider circuit to calculate Rin.

Figure 14. Finding Rin experimentally.

Figure 15. Voltage divider circuit to find Rout.

Figure 16. Finding Voc experimentally.

Figure 17. Finding Vloaded experimentally.

Figure 18. DC equivalent circuit of a common collector amplifier.

Figure 19. Single-loop circuit to find base current.

Figure 20. AC equivalent circuit of example common collector.


LIST OF TABLES

Table 1. XOR parity truth table.

Table 2. Resistor values used to develop output parameters.

Table 3. Statistical performance of neural network.


ACKNOWLEDGEMENTS

I would like to thank my parents for their unconditional support and for constantly pushing me to be a better student. They are both my reason for pushing forward and have helped me through the toughest times. I also want to thank the Shako family for being my second set of parents. I admire them and I hope I have made them proud.

I would also like to thank Sharon Hemminger of the honors program and Dr. Thomas Hemminger, my thesis advisor, for being supportive of me and always extending a helping hand as I moved so far from home to a new city. Thank you for being like family to me at Penn State Behrend.

Chapter 1

Background Information

Neural Networks

Neural networks provide a computational approach to solving the many problems and questions that we have asked ourselves with regard to pattern recognition [1]. We often wonder what it is that makes humans able to recognize patterns such as peoples’ faces, our pets, and a host of other things.

Computers can help us resolve our questions regarding mathematics and logic, but how can they help with pattern analysis if they lack our sense of intuition and recognition? How can they provide an answer to whether a set of objects belongs to a specific group or family? Finally, how can they recognize patterns and predict possible outcomes if they lack the perceptions that we can form in our minds? This is why neural networks are so important when we need computers to assist us in our day-to-day lives and to help in our goal of artificial intelligence. Although this thesis focuses on the power of artificial neural networks in pattern recognition, they can be used for many other purposes, including function approximation and regression analysis, image compression, character recognition, and market prediction [2].

Artificial neural networks provide a computational approach aptly described by the author Michael Nielsen as a "beautiful, biologically-inspired programming paradigm which enables a computer to learn from observational data." These networks are modeled after our own biological neural networks. It makes sense for researchers to look to our own minds for inspiration when modeling machine learning techniques, making their "thinking process" more similar to ours. Researchers have developed models of intelligent systems composed of arrays of elemental processors called neurons [1].

Artificial neural networks are composed of simple elements known as perceptrons, which operate in parallel and are interconnected via weighted links [3]. Each perceptron applies a non-linear activation function whose output usually ranges from 0 to 1, or from -1 to 1. A neuron is connected to other neurons by its input and output links. The neuron sums the incoming weighted values, and this sum is the input to an activation function; the output of the activation function is the output of the neuron. One of the most common activation functions, the Sigmoid function, is shown below in Figure 1. Networks are trained so that a particular input will yield a desired target value within a given input set. Then, a previously unseen input can be presented to the network and categorized as belonging to a particular set or not [4]. The connections between the neurons are weighted, and the network can be trained by adjusting these weights by one of several methods, such as back-propagation or more sophisticated methods like that of Levenberg-Marquardt [5].

Figure 1. Sigmoid activation function [4].
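The weighted-sum-then-activation step described above can be sketched in a few lines of Python (purely an illustration; the input and weight values here are arbitrary, not taken from this thesis):

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs passed through a sigmoid activation."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid bounds the output to (0, 1)

# One neuron with two inputs; training would adjust the weights and bias.
out = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
```

Because the sigmoid saturates smoothly at 0 and 1, its derivative is simple to compute, which is what makes gradient-based training methods such as back-propagation practical.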

The history of neural networks is a rich one. In 1943 Warren McCulloch and Walter Pitts, a neurophysiologist and a mathematician respectively, wrote a paper on how they believed biological neurons functioned, and modeled their findings using electrical circuits [2]. As computers became more advanced, the IBM research laboratory attempted to create a neural network. Nathaniel Rochester spearheaded the attempt; unfortunately, he was unsuccessful. It was only a couple of years later that Bernard Widrow and Marcian Hoff at Stanford developed the first practical neural network. Widrow and Hoff were very successful in their development of an artificial neural network with a single layer. Such networks are made up of interconnected nodes, or neurons, each containing a non-linear activation function. The input layer receives all the patterns the user would like the neural network to analyze and communicates them to the hidden layers. Figure 2 shows a visual representation of the connections between the nodes and the layers in a neural network [2].

Figure 2. Neural network with one hidden layer [2].

In 1975, thirteen years after Widrow and Hoff, other researchers developed a neural network that had multiple layers and was able to do multiple calculations in parallel. In 1986, David Rumelhart developed a technique known as back propagation, which can train networks with many layers. These networks need many iterations to learn, as compared to the techniques known as hybrids, which only use two layers [3]. Research on neural networks moved faster as computers became more powerful: some neural network systems could take a great deal of time to learn, and faster processors made it more attractive for researchers to continue developing this area of research [2].

One of the early, and classical, applications of neural networks is the parity problem, sometimes known as the XOR problem. Table 1 shows the truth table for the XOR problem: equal digital inputs yield an output value of 0, while unequal inputs yield an output of 1. This circuit is commonly used in digital adders in computers and calculators. However, it is important to note that the outputs are not linearly separable, i.e., they cannot be separated into two classes by a single straight line. Figure 3 below is a graphical representation of the parity problem. In this figure there are two groups: the output of 1 is represented by Os and the output of 0 is represented by Xs. These two groups are arranged in such a way that it is impossible to separate them by drawing a single straight line [5]. On the other hand, if the problem is thought of in three dimensions, with the inputs in one plane and the outputs orthogonal to that plane, the sets can be separated by a single plane using the simple non-linear network illustrated in Figure 4, which shows a 3D model of the plane necessary to separate the two groups. It is important that the activation functions be non-linear; otherwise the network collapses into a linear separator, which cannot solve this and many other problems. In this case, an activation function such as the Sigmoid function shown before would work.

Table 1. XOR parity truth table.

X   Y   Output
0   0   0
0   1   1
1   0   1
1   1   0


Figure 3. A two-dimensional representation of the parity problem.

The two classes cannot be separated by a single straight line.

Figure 4. A three-dimensional representation of the parity problem [4].

A plane separates the two classes.
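As an illustration of why a hidden layer solves the parity problem, the sketch below hand-wires a tiny two-layer network with step activations. The particular weights and thresholds are one of many workable choices, not taken from this thesis; a trained network with sigmoid activations would find an equivalent separation:

```python
def step(x):
    """Hard-threshold activation: fires (1) when the weighted sum is positive."""
    return 1 if x > 0 else 0

def xor_net(x, y):
    # Hidden layer: h1 fires when at least one input is high,
    # h2 fires only when both inputs are high.
    h1 = step(x + y - 0.5)
    h2 = step(x + y - 1.5)
    # Output fires when h1 is active but h2 is not: exactly one input high.
    return step(h1 - h2 - 0.5)

outputs = [xor_net(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

Removing the hidden layer makes the network a single linear threshold, which cannot reproduce the XOR truth table, no matter how the weights are chosen.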

Bipolar Junction Transistors

This research provides an argument for the integration of neural networks with Bipolar Junction Transistor (BJT) amplifier topologies. A bipolar junction transistor is a three-terminal semiconductor device. It was the first mass-produced transistor and was originally developed in 1948 at the Bell Telephone Laboratories in New Jersey [6]. The BJT is made of heavily doped silicon; doping refers to the process of adding impurities to the silicon in order to change its electrical properties. As mentioned previously, there are three terminals to this device: an emitter, a base, and a collector. These can be arranged in an NPN configuration or a PNP configuration, where N refers to semiconductor material in which electrons are the majority carriers, and P refers to material in which positive charges (holes) are the majority [7]. Figure 5 shows a visual representation of both types of BJT and their electrical symbols.

Figure 5. BJT transistor: (a) PNP schematic symbol, (b) physical layout, (c) NPN symbol, (d) layout [7].

The invention of the BJT began a revolution in electronics. BJTs made it possible to create lightweight and inexpensive devices. MOSFETs were not developed for another decade, and Integrated Circuits (ICs) were developed 20 years later, in 1968. Nowadays, BJTs have largely been displaced by MOSFETs and ICs, but they are still preferred in high frequency and analog applications.

There are three basic topologies for BJT amplifiers: common emitter, common collector, and common base. Although these topologies can be very simple, we will be analyzing biased versions of the common collector. Biasing is very important in amplifier design because it establishes the correct operating point, providing a linear region for the circuit to receive signals, which reduces distortion in the output [8]. The common emitter amplifier is usually used as a voltage amplifier; it gets its name because the emitter terminal is connected to ground in the AC equivalent circuit. Figure 6 is a graphical depiction of a biased common emitter amplifier. The common collector is used as a voltage buffer, which is to say that its voltage gain is approximately 1 but its input and output impedances can be adjusted as needed; it gets its name because the collector terminal is connected to ground in the AC equivalent circuit. Figure 7 illustrates a biased common collector circuit topology. Finally, the common base is used as a current buffer, which is to say that its current gain is approximately 1, and it gets its name because the base terminal is connected to ground in the AC equivalent circuit. The circuit topology for the common base is shown in Figure 8.

Figure 6. Common emitter amplifier [8].

Figure 7. Common collector amplifier [8]. 8

Figure 8. Common base amplifier [8].

BJT amplifier circuits are often designed and analyzed by electrical and computer engineering students in their junior year. The parameters necessary for the design of every BJT amplifier include: voltage gain (AV), input impedance (RIN), output impedance (ROUT), and the collector to emitter voltage (VCE), which determines whether the amplifier is operating in the linear region. The neural network in this project should be able to accept the latter three parameters as inputs, considering that AV ≈ 1 in a common collector circuit. The network will then output possible values for the external resistors and bypass capacitors, which are necessary to yield useful designs. There are many applications for a neural network system providing this capability. Engineers designing BJT amplifiers can use it: knowing the specifications of the required output, the network could yield resistor and capacitor values and save a great deal of calculation time. It also allows experimentation with several designs without having to perform detailed calculations [9]. Professors who need to evaluate specific amplifier designs and grade the circuits supplied by students could also reference this work. When students submit a circuit design, a professor must determine if the configuration and component values meet the requirements. When students in a classroom are given liberty to design an amplifier circuit that could differ from typical designs, a professor would need to evaluate every solution to determine whether it meets the expected criteria. This neural network should make it more convenient to grade many designs, since it would show all possible component options.


Chapter 2

Common Collector Amplifier

This work focuses on the common collector amplifier. The voltage gain of this configuration, which refers to the ratio between output and input voltages, is approximately 1 and, as such, will be held constant; this is why the topology is also known as a voltage follower circuit. As designed, this neural network can accept three parameters: input resistance (RIN), output resistance (ROUT), and the voltage that falls across the collector-emitter junction (VCE); it should return the values for the three biasing resistors: RB1, RB2, and the emitter resistor RE. Figure 9 shows the biasing scheme that will be analyzed in this paper. For these experiments, the capacitors are considered ideal, which means they act as short circuits in the AC equivalent circuit but block the DC component. The AC equivalent circuit is shown in Figure 10, and it is evident from that figure why this topology is known as common collector: the collector pin is grounded. Small signal analysis is useful to determine gain, input resistance, and output resistance. In the AC model the BJT is replaced with a voltage-dependent current source and a resistor, re, the resistance between the base and emitter determined by looking into the emitter.

Figure 9. Common collector amplifier topology.

Figure 10. Common collector amplifier AC equivalent circuit.

Determining RIN and ROUT Experimentally

In order to determine the experimental parameters of the common collector circuit, an example circuit was recreated in PSpice; it is shown in Figure 11. A 10 V DC power supply is connected to the collector and to one of the biasing resistors. A 50 Ω resistor was connected in series with the AC signal. A 2N3904 transistor was selected for this model because it is a common BJT, and β, the ratio of collector current to base current, is often approximated at 160 for this transistor. The biasing resistors are usually very large; in this example, 2 MΩ resistors were selected. Both the emitter resistor and the load resistor were selected to be 1 kΩ. In addition, the capacitors were selected at 47 µF, which is large enough to act as a near perfect AC conductor. A 0.2 V amplitude sinusoid was chosen as the input voltage to the BJT to prevent clipping of the output that would result from driving the transistor into saturation. RIN and ROUT were determined experimentally in PSpice, and VCE was selected to be in the 2 V to 8 V range. The voltage gain of this circuit was shown to be approximately 1 in Figure 12 by placing a probe at the input voltage source and a probe over the load resistor. Both traces follow each other very closely, almost identically; in practice, the ratio is 0.96, but it is very close to 1.

Figure 11. Example common collector circuit.

Figure 12. Gain of the transistor circuit.

The next step was to determine the input resistance, RIN, experimentally. A probe was placed to the left of the source resistance and another probe was placed to the right of the source resistance. The output resistance of the source was increased to 50 kΩ in order to see a voltage drop across that resistor. Everything to the right of the capacitor can be modeled as a single resistor, RIN. Therefore, this problem can be easily solved with the voltage divider shown in Figure 13 and equation (1), where VX is the voltage measured to the right of the source resistance. The results are shown in Figure 14, where the smaller sinusoid represents VX and the larger sinusoid represents VIN. With a measured peak voltage for VX of 148 mV and using 200 mV for VIN, RIN was determined to be 142 kΩ.

VX = VIN · RIN / (50×10³ + RIN)    (1)
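Rearranging equation (1) for RIN and substituting the measured peaks reproduces the quoted value. The thesis performed this step by hand; the Python sketch below is only a numerical check:

```python
# Solve equation (1) for Rin given the measured peak voltages.
Rs = 50e3      # source resistance used in the simulation (ohms)
Vin = 0.200    # input peak amplitude (V)
Vx = 0.148     # measured peak to the right of Rs (V)

# Vx = Vin * Rin / (Rs + Rin)  =>  Rin = Rs * Vx / (Vin - Vx)
Rin = Rs * Vx / (Vin - Vx)     # about 142 kohm
```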

Figure 13. Voltage divider circuit to calculate Rin.

Figure 14. Finding Rin experimentally.

The next step was to determine ROUT experimentally. The first step was to determine the open circuit voltage taken at the emitter terminal. Everything to the left of the load resistor can be modeled as the open circuit output voltage in series with the output resistance of the circuit. Again, the problem can be solved with a voltage divider circuit, a model of which is shown in Figure 15. The expression that relates the open circuit voltage to the loaded voltage is given by equation (2), where VY is the voltage taken across the load once it has been loaded by a 1 kΩ resistor. Initially, the load resistance was changed to 10¹² Ω to simulate an open circuit. The test to determine the open circuit voltage was run for 3 ms and the results are shown in Figure 16; the maximum voltage found was 192 mV. The test was repeated with the load resistor changed to 1 kΩ and run for another 3 ms; the results are shown in Figure 17. The maximum voltage found was 185 mV. With these values, the output resistance was calculated to be 33 Ω.

VY = VOC · 1×10³ / (1×10³ + ROUT)    (2)
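The same rearrangement applies to equation (2). Note that the result is quite sensitive to small differences in the measured peaks, so this sketch lands near, but not exactly on, the 33 Ω quoted above (the thesis presumably used slightly different measured values than the rounded peaks reported):

```python
# Rearrange equation (2) to solve for Rout from the two measurements.
RL = 1e3       # load resistance (ohms)
Voc = 0.192    # open-circuit peak voltage (V)
Vy = 0.185     # loaded peak voltage (V)

# Vy = Voc * RL / (RL + Rout)  =>  Rout = RL * (Voc - Vy) / Vy
Rout = RL * (Voc - Vy) / Vy    # a few tens of ohms
```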

Figure 15. Voltage divider circuit to find Rout.

Figure 16. Finding Voc experimentally. 15

Figure 17. Finding Vloaded experimentally.

Determining RIN and ROUT Analytically

The next step was to determine the input and output resistances mathematically. The first thing to do was to analyze the DC equivalent model of the circuit, shown in Figure 18. In the DC equivalent model, the ideal capacitors are treated as open circuits in order to find the currents and voltages from the power supply. Taking the Thevenin equivalent, the two biasing resistors appear in parallel and, as such, have an equivalent resistance RB of 1 MΩ, found through equation (3). To determine the DC biasing voltage, the base resistors and source are replaced with their Thevenin equivalent voltage, Vth, which is calculated through equation (4) to be 5 V. Figure 19 shows a simpler, single-loop circuit which can help identify the base current. It can be determined by Kirchhoff's Voltage Law with equation (5). By solving for the base current, IB, we determine that it has a value of 3.70 µA. Using equation (6) we calculate that the emitter current, IE, is equal to 596 µA, and finally, using equation (7), we arrive at the conclusion that the collector current, IC, is equal to 593 µA.

RB = RB1·RB2 / (RB1 + RB2)    (3)

Vth = VCC · RB2 / (RB1 + RB2)    (4)

−Vth + RB·IB + 0.7 V + RE·(β + 1)·IB = 0    (5)

IE = (β + 1)·IB    (6)

IC = β·IB    (7)
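Equations (3) through (7) can be checked numerically for the example component values; the Python sketch below mirrors the hand calculation:

```python
# DC bias point of the example circuit, following equations (3)-(7).
Vcc, Vbe, beta = 10.0, 0.7, 160
Rb1 = Rb2 = 2e6                  # biasing resistors (ohms)
Re = 1e3                         # emitter resistor (ohms)

Rb = Rb1 * Rb2 / (Rb1 + Rb2)     # (3): equivalent base resistance, 1 Mohm
Vth = Vcc * Rb2 / (Rb1 + Rb2)    # (4): Thevenin base voltage, 5 V
Ib = (Vth - Vbe) / (Rb + (beta + 1) * Re)   # (5) solved for the base current
Ie = (beta + 1) * Ib             # (6): emitter current
Ic = beta * Ib                   # (7): collector current
```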

Figure 18. DC equivalent circuit of a common collector amplifier.

Figure 19. Single-loop circuit to find base current.

Once the DC equivalent of the circuit has been analyzed, the DC source is set to zero by superposition, which creates the AC ground that is now connected to the collector terminal of the BJT. In the AC equivalent circuit, the capacitors are considered short circuits, which allows the sinusoidal component to be connected to the circuit once again. The AC equivalent model is shown in Figure 20; this representation of the BJT is known as a T-Model. This model is more useful when there are components attached to the emitter, as opposed to the Hybrid Pi Model, which is commonly used with the common emitter topology. The DC currents that were calculated are important in determining critical values in the AC circuit such as the transconductance, gm, which relates the current at the output of the device to the voltage at the input of the device. Using equation (8) and taking the thermal voltage VT as a constant 25 mV, we calculate gm to be 0.0237 A/V.

gm = IC / VT    (8)

Figure 20. AC equivalent circuit of example common collector.

The following steps are used to determine the input and output resistances. We must first find re, the resistance looking into the emitter terminal of the transistor; using gm from equation (8), it is calculated to be 42 Ω. The next step is to calculate the input resistance RIN by taking the parallel combination of the biasing resistors and the reflected emitter resistance, as shown in equation (9); the input resistance was calculated to be 144 kΩ. Finally, the output resistance RO was calculated by finding the parallel combination of re and RE, as shown in equation (10); the output resistance was calculated to be 40 Ω.

re = (β / (β + 1)) · (1 / gm)

RIN = RB1 || RB2 || [(re + RE)·(β + 1)]    (9)

RO = re || RE    (10)
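The small-signal quantities follow directly from the bias point. This Python sketch reproduces the quoted gm, re, RIN, and RO for the example circuit:

```python
# Small-signal values for the example bias point, per equations (8)-(10).
Vt = 0.025               # thermal voltage (V)
beta = 160
Rb1 = Rb2 = 2e6          # biasing resistors (ohms)
Re_ext = 1e3             # external emitter resistor (ohms)

Ib = (5.0 - 0.7) / (1e6 + (beta + 1) * Re_ext)   # base current from the DC analysis
Ic = beta * Ib
gm = Ic / Vt                                     # (8): transconductance
re = (beta / (beta + 1)) / gm                    # resistance looking into the emitter
# (9): biasing resistors in parallel with the reflected emitter branch
Rin = 1.0 / (1/Rb1 + 1/Rb2 + 1/((re + Re_ext) * (beta + 1)))
Ro = re * Re_ext / (re + Re_ext)                 # (10): re in parallel with RE
```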

These calculated results are consistent with those measured experimentally. With the values for the input resistance and output resistance determined both analytically and experimentally, and a range for VCE established as 2 V to 8 V, we can move on to creating a neural network trained to calculate the biasing components.

Chapter 3

Designing a Neural Network to Determine Component Values

This thesis develops a method of amplifier design using feed-forward neural networks. Neural networks are non-linear systems and are often employed to associate input patterns with their corresponding outputs. The larger biasing resistance values shown earlier were mainly used to verify the calculations; lower values in the 10 kΩ range were determined to be more appropriate for these tests.

In order to train the neural networks, a set of "for" loops was created in MATLAB for the three biasing resistors (see Appendix A). For all of the tests, the resistor values ranged as shown in Table 2.

Table 2. Resistor values used to develop output parameters.

Resistor   Start Value   Step Value   Stop Value
Rb1        4 kΩ          250 Ω        10 kΩ
Rb2        4 kΩ          250 Ω        10 kΩ
Re         400 Ω         100 Ω        1.5 kΩ

The values of RIN, ROUT, and VCE were calculated for all of the resistor combinations. Once this was completed, a neural network was trained using the input training values of RIN, ROUT, and VCE to compute an output set of the three biasing resistor values. In developing the network, the inputs and outputs were normalized to a magnitude of 1 to ensure convergence. There were 7500 training patterns employed in this project, limited to realistic values; for example, VCE was held to the range of 2 volts to 8 volts to ensure linearity. It is important to note that when testing the system, a different set of patterns was employed to help determine whether the network was properly trained. The program for determining the efficacy of the system is listed in Appendix B.

The neural network package in MATLAB was utilized to train the networks, employing the Levenberg-Marquardt algorithm, with one hidden layer of 16 sigmoidal, hyperbolic tangent (tanh) neurons. The network was trained for 2000 epochs, resulting in a mean-squared error (mse) of approximately 7.1×10⁻⁷; further training did not appear to improve performance. The objective was to map the transistor parameters to the values of the biasing resistors, and this was accomplished.
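The training-set generation described above (the nested loops of Appendix A, with the 2 V to 8 V VCE filter) can be mirrored in Python; this sketch reproduces the pattern count quoted for the training set:

```python
# Regenerate the training grid of Table 2 and apply the 2-8 V Vce filter,
# mirroring the MATLAB loops in Appendix A.
Vcc, beta, Vt = 10.0, 160, 0.025

patterns = []
for Rb1 in range(4000, 10001, 250):        # 25 values of Rb1
    for Rb2 in range(4000, 10001, 250):    # 25 values of Rb2
        for Re in range(400, 1501, 100):   # 12 values of Re
            Vth = Vcc * Rb2 / (Rb1 + Rb2)
            Rb = Rb1 * Rb2 / (Rb1 + Rb2)
            Ib = (Vth - 0.7) / (Rb + (beta + 1) * Re)
            Ie = (beta + 1) * Ib
            Vce = Vcc - Ie * Re
            if 2 <= Vce <= 8:              # keep only linear-region bias points
                patterns.append((Rb1, Rb2, Re))

n_patterns = len(patterns)
```

With the Table 2 ranges, every combination of the 25 × 25 × 12 grid happens to pass the VCE filter, which is why the training set contains exactly 7500 patterns.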

Note that the number of patterns changes between the training and testing scenarios. This occurs because as the values of the biasing resistors change, the output parameters change, and some fall outside the ranges specified earlier; only those that fall within the specified realistic ranges are employed in the tests. The performance obtained when using the resistance values illustrated in Table 2 is shown in Table 3. It is not required that these ranges be followed precisely during the design process, but it is likely a good practice to stay within them when considering an input set.

Table 3. Statistical performance of neural network.

Data type      Number of patterns   Upper base resistor Rb1 (mse)   Lower base resistor Rb2 (mse)   Emitter resistor Re (mse)
Training set   7500                 1.334                           5.16                            0.116
Test Set 1     5308                 1.34                            5.15                            0.816
Test Set 2     6762                 2.17                            4.63                            1.271
Test Set 3     8396                 1.53                            3.34                            0.983
Test Set 4     10326                1.68                            4.84                            1.177

Chapter 4

Conclusions

This work shows that it is possible to train a neural network to solve electrical circuits for practical purposes. Bipolar junction transistor amplifier circuits of different topologies have different biasing components, and the values of these components depend on the design specifications. The three topologies are the common emitter, common collector, and common base amplifiers. The parameters necessary for the design of every BJT amplifier circuit include: voltage gain (AV), input resistance (RIN), output resistance (ROUT), and the collector to emitter voltage, which determines whether the amplifier is operating in the linear region. This thesis focused on the common collector amplifier, which has a voltage gain of approximately 1 and is therefore useful as a voltage buffer. The work in this thesis illustrates how useful neural networks can be in calculating biasing resistor values.

BJT amplifier circuits are usually designed and analyzed by Electrical Engineering students in their junior year. A neural network that can accept the necessary parameters for a BJT amplifier circuit and provide all the acceptable biasing resistor values is very useful. Someone referencing this work would have an easier time evaluating transistor amplifier circuits with open-ended parameters by comparing their results to those of the neural network.

Appendix A

MATLAB Code for Training

% Common Collector spring 2017
% this does not include the source impedance or the load impedance;
% those are handled separately as input and output impedances
clc
clear

Vt = 0.025;                    % thermal voltage
Vcc = 10;                      % supply voltage
beta = 160;                    % transistor beta
mat_size = 7500;
Rb1_vec = zeros(1,mat_size);   % base resistor vector
Rb2_vec = zeros(1,mat_size);   % base resistor vector
Re_vec  = zeros(1,mat_size);   % emitter resistor vector
Rin_vec = zeros(1,mat_size);   % Rin vector
Ro_vec  = zeros(1,mat_size);   % Ro vector
Vce_vec = zeros(1,mat_size);   % Vce vector
count = 1;                     % vector index
maxRb1 = 10000;                % upper resistor
maxRb2 = 10000;                % lower resistor
maxRe  = 1500;

for Rb1 = 4000:250:maxRb1
    for Rb2 = 4000:250:maxRb2
        for Re = 400:100:maxRe
            Rb1_vec(count) = Rb1;                 % top base resistor
            Rb2_vec(count) = Rb2;                 % bottom base resistor
            Re_vec(count)  = Re;                  % emitter resistor
            Vth = Vcc*Rb2/(Rb1+Rb2);              % Thevenin voltage for base
            Rb  = Rb1*Rb2/(Rb1+Rb2);              % equivalent base resistance
            Ib  = (Vth-0.7)/(Rb+(beta+1)*Re);     % base current
            Ic  = beta*Ib;                        % collector current
            Ie  = (beta+1)*Ib;                    % emitter current
            r_pi = Vt/Ib;                         % base resistance
            gm  = Ic/Vt;                          % transconductance
            Vce = Vcc-Ie*Re;                      % Vce value
            re  = r_pi/(beta+1);                  % emitter resistance
            Ro  = re*Re/(re+Re);
            if ((Vce >= 2) && (Vce <= 8))
                Vce_vec(count) = Vce;             % collector emitter voltage
                Rin_vec(count) = Rb*(beta+1)*(re+Re)/(Rb+(beta+1)*(re+Re)); % input impedance
                Ro_vec(count)  = Ro;              % output impedance
                count = count+1;
            end
        end
    end
end
count = count-1

% display the ranges for the requested values
min(Rin_vec)
max(Rin_vec)
min(Ro_vec)
max(Ro_vec)
min(Vce_vec)
max(Vce_vec)

net = newff([min(Rin_vec) max(Rin_vec); min(Ro_vec) max(Ro_vec); ...
             min(Vce_vec) max(Vce_vec)],[14,3],{'tansig','purelin'},'trainlm');
net.trainParam.show = 5;
net.trainParam.epochs = 2000;
net.trainParam.goal = 1e-8;
p = [Rin_vec; Ro_vec; Vce_vec];

% normalize the targets by each one's maximum value
t = [Rb1_vec/maxRb1; Rb2_vec/maxRb2; Re_vec/maxRe];
net = train(net,p,t);
save('weights_2','net');       % save the network weights to disk

Appendix B

Neural Network Testing

% Run the trained network
% First regenerate the original-style data,
% using a different test set (different sweep step sizes)
clc
clear
Vt = 0.025;  % thermal voltage
Vcc = 10;    % supply voltage
beta = 160;  % transistor beta
mat_size = 8316;
Rb1_vec = zeros(1,mat_size); % base resistor vector
Rb2_vec = zeros(1,mat_size); % base resistor vector
Re_vec = zeros(1,mat_size);  % emitter resistor vector
Rin_vec = zeros(1,mat_size); % Rin vector
Ro_vec = zeros(1,mat_size);  % Ro vector
Vce_vec = zeros(1,mat_size); % Vce vector
count = 1;       % vector index
maxRb1 = 10000;  % upper limit for the top base resistor
maxRb2 = 10000;  % upper limit for the bottom base resistor
maxRe = 1500;    % upper limit for the emitter resistor
for Rb1 = 4000:220:maxRb1
    for Rb2 = 4000:230:maxRb2
        for Re = 400:110:maxRe
            Rb1_vec(count) = Rb1; % top base resistor
            Rb2_vec(count) = Rb2; % bottom base resistor
            Re_vec(count) = Re;   % emitter resistor

            Vth = Vcc*Rb2/(Rb1+Rb2);         % Thevenin voltage at the base
            Rb = Rb1*Rb2/(Rb1+Rb2);          % equivalent base resistance
            Ib = (Vth-0.7)/(Rb+(beta+1)*Re); % base current
            Ic = beta*Ib;                    % collector current
            Ie = (beta+1)*Ib;                % emitter current
            r_pi = Vt/Ib;                    % small-signal base resistance
            gm = Ic/Vt;                      % transconductance
            Vce = Vcc-Ie*Re;                 % collector-emitter voltage
            re = r_pi/(beta+1);              % small-signal emitter resistance
            Ro = re*Re/(re+Re);              % output impedance

            if ((Vce >= 2) && (Vce <= 8))
                Vce_vec(count) = Vce;        % collector-emitter voltage
                Rin_vec(count) = Rb*(beta+1)*(re+Re)/(Rb+(beta+1)*(re+Re)); % input impedance
                Ro_vec(count) = Ro;          % output impedance
                count = count+1;
            end
        end
    end
end
count = count-1
load('weights_2','net');              % load the network weights from disk
pattern = [Rin_vec; Ro_vec; Vce_vec]; % build a pattern matrix from all of the important features
result = sim(net,pattern);            % run the network on the calculated features

% the result will be the normalized resistor values

% rescale the results back to their normal values to test them
result(1,:) = result(1,:) * maxRb1;
result(2,:) = result(2,:) * maxRb2;
result(3,:) = result(3,:) * maxRe;

% calculate the mean squared error for each resistor
err_Rb1 = mean((Rb1_vec-result(1,:)).^2)
err_Rb2 = mean((Rb2_vec-result(2,:)).^2)
err_Re = mean((Re_vec-result(3,:)).^2)

figure(1)
plot(Rb1_vec-result(1,:)); % plot the base 1 resistor difference
title('Rb1')
figure(2)
plot(Rb2_vec-result(2,:)); % plot the base 2 resistor difference
title('Rb2')
figure(3)
plot(Re_vec-result(3,:));  % plot the emitter resistor difference
title('Re')
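The rescale-then-score step at the end of the test script can be illustrated numerically. The standalone Python sketch below uses a made-up 1% offset in the normalized outputs as a stand-in for network prediction error (it is not a measured result from the thesis) to show how an error in normalized units maps back into squared ohms after rescaling.

```python
import numpy as np

# Maximum values used to normalize the targets, as in the MATLAB scripts.
max_rb1, max_rb2, max_re = 10000.0, 10000.0, 1500.0
targets = np.array([[4000.0], [4000.0], [400.0]])   # true Rb1; Rb2; Re in ohms
scales = np.array([[max_rb1], [max_rb2], [max_re]])

# Stand-in for sim(net, pattern): the normalized targets plus a
# hypothetical 1% offset (an assumed error, for illustration only).
result = targets / scales + 0.01

result = result * scales                         # rescale back to ohms
mse = np.mean((targets - result) ** 2, axis=1)   # per-resistor mean squared error
# A 1% normalized offset becomes (0.01 * max)^2 in squared ohms:
# 10000 for the 10 kOhm base resistors, 225 for the 1.5 kOhm emitter resistor.
```

This is why the three error figures are on different scales: the same normalized error is magnified by each resistor's own maximum when converted back to ohms.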

BIBLIOGRAPHY

[1] Nielsen, Michael A. "Neural Networks and Deep Learning." Determination Press, 1 Jan. 2017. Web. 20 Feb. 2017.

[2] "Neural Networks - History." Stanford University, n.d. Web. 20 Feb. 2017.

[3] Hemminger, Thomas L. "A Real-Time Neural-Net Computing Approach to the Detection and Classification of Underwater Acoustic Transients." Case Western Reserve University (1992): 1-6. Web. 22 Feb. 2017.

[4] Kendall, Graham. "Introduction to Artificial Intelligence." G5AIAI: Neural Networks. N.p., n.d. Web. 2 Apr. 2017.

[5] Pao, Yoh-Han. "Chapter 5 - Learning Discriminants: The Generalized Perceptron." Adaptive Pattern Recognition and Neural Networks. Reading, Mass.: Addison-Wesley, 1994. Print.

[6] Hu, Chenming C. Modern Semiconductor Devices for Integrated Circuits. Upper Saddle River, NJ: Prentice Hall, 2010. Print.

[7] "Introduction to Bipolar Junction Transistors (BJT)." All About Circuits. N.p., n.d. Web. 21 Feb. 2017.

[8] Sedra, Adel S., and Kenneth C. Smith. Microelectronic Circuits. New York: Oxford University Press, 2015. Print.

[9] Hemminger, Thomas L. "A Neural Network Approach to Transistor Circuit Design." Pennsylvania State University (2016): 1-3. Print.

ACADEMIC VITA
Daphne C. Cruz Hidalgo

Education
The Pennsylvania State University - The Behrend College, May 2017
Bachelor of Science in Electrical Engineering
Schreyer Honors College Scholar
Minor in Computer Engineering

Senior Capstone Project
• Collaborating with FMC Technologies to create a device that will accelerate the process of calibrating flow meters for the oil and gas industry.
• Customizing a logic analyzer for use with a BeagleBone Black, an embedded Linux device, to perform real-time processing of pulse input data from flow meters in a flow calibration facility.
• Processed pulse data transmitted to a network using the MQTT protocol.
• Designed a test fixture using Arduinos to create asynchronous pulses.

Relevant Academic Experience
• Hardware and software design using microcontrollers for user/system interfaces, data acquisition, and control; programmed in C. Experience with the I2C, RS-232, CAN, LIN, and SPI communication protocols.
• Designed electronic circuits for amplification, filtering, and A/D and D/A conversion.
• Developed PLC-based control systems with HMI operator interfaces using ladder logic on Allen-Bradley PLCs.

Work Experience
Electrical Engineering Intern: Delphi Automotive - Warren, OH, Summer 2016
Gained broad exposure to the design process and to working with electronics. Created a mechanization drawing, an electronic circuit schematic, and a printed circuit board (PCB) layout; hand-assembled PCBs; and built housings and test boards for a project.

Quality Assurance Intern: Fenetech Inc. - Aurora, OH, Summer 2015
Executed regression, functional, and performance testing on FeneVision, a software suite for the window and glass making industry. Used SQL to query databases and verify that the software performed the appropriate actions.

Peer Educator: Penn State Behrend Health and Wellness Center - Erie, PA, Fall 2015
Assisted the professor with NURS 197 (Spanish Nursing). Shared knowledge of my native language and culture with non-native Spanish speakers to help them excel in their profession. Translated portions of the textbook into Spanish to give students an immersive course experience.

Peer Tutor: Penn State Behrend Learning Resource Center - Erie, PA, August 2014-Present
Conducted one-on-one and group tutoring in Electrical Engineering courses, Spanish, and Calculus. Led students through real-world applications of the course material.

Risk & Appraisal Analyst: Banco Continental - San Pedro Sula, Honduras, Summer 2014
Assessed property values to analyze the risk associated with acquiring land as collateral. Completed a study with a team of engineers on the department's profitability.

Computer Skills
C, C++, PSpice, Mentor Graphics, Assembly, MATLAB, Simulink, VHDL, Ladder Logic, Visio

Organizations and Awards
The Honor Society of Phi Kappa Phi (multidisciplinary); The Honor Society of Pi Mu Epsilon (mathematics); The Tau Beta Pi Association (engineering), Officer; Society of Women Engineers, Member; Behrend Honors Scholar; Dean's List every semester

Cross-Cultural Experience
Fluent in Spanish; intermediate French; basic knowledge and understanding of German and Portuguese. Travelled to many countries and experienced different cultures in Europe, the Caribbean, and Latin America.