
Session ETD 525

A Real Exploration of Euler’s Imaginary i: Isomorphism and Applications to AC Circuit Theory

Andrew Grossfield, Ph.D., P.E. Vaughn College of Technology

Abstract

In elementary schools students are taught that negative numbers do not have square roots. The appearance of the square root of a negative number in the course of a computation indicates that either the problem has no solution or an error has occurred. Subsequently students are told that negative numbers have “imaginary” square roots which can be constructed using the symbol i, which represents the square root of –1. However, this reasoning appears logically inconsistent. There is nothing imaginary about the symbol i or its use.

This paper treats the following interesting topics in the theory of functions of a complex variable:

1) sensible introductions to Euler’s i that conform to the way engineers and technicians use the symbol in analyzing circuits and mechanically vibrating systems;

2) the derivation of the algebraic and topological features of the complex plane and a comparison of these features to the properties of “real” numbers;

3) the description of the isomorphism between complex numbers and combinations of same-frequency sinusoidal oscillations that underlies the theory of alternating current analysis promoted and successfully used by C. P. Steinmetz for the distribution of electrical power throughout the United States;

4) the derivation of the linear 2-dimensional rotational mappings represented by a system of 2×2 matrices with real number entries. These mappings can represent complex numbers and serve as an alternative definition of the meaning of the symbol i.

This paper should provide a reasonable introduction to theories of alternating currents and vibrations and encourage further study of the theory of complex variables.

Background

The history of the development of the “real” number system began with the positive whole numbers to which zero and the positive rational fractions were added. These numbers were represented by points to the right of the origin on the “real” number line. The points to the left of the origin on the line were added with the inclusion of the minus ( – ) symbol. Multiplying any number on the horizontal positive axis by –1 reversed its direction. The new system together with the irrational numbers included all the points from – ∞ to + ∞ on the 1-dimensional “real” number line.

Proceedings of the 2019 Conference for Industry and Education Collaboration Copyright ©2019, American Society for Engineering Education Session ETD 525

To create a 2-dimensional number plane, a symbol is needed which would lift points off the horizontal “real” axis. This conventional symbol is i, which has the property that it rotates vectors by 90º counterclockwise. In this system the Cartesian form a + bi of a complex number can be constructed by adding the horizontal value a to the value b rotated by 90º counterclockwise. On the real number line this is the same construction that is capable of producing negative numbers by adding the positive value a to the positive value b rotated by 180º counterclockwise, that is, a + b × (–1).

The value i times i has the effect of adding two 90º counterclockwise rotations producing a rotation of 180º which is the same as multiplication by –1; that is, it produces a reversal in direction. Taking this as a starting point, all the properties of complex numbers can be derived, which will clarify the conceptualization and the computation of complex numbers. There is nothing imaginary here. See figures 1, 2, and 3.

Figure 1 1 × i = 90° counterclockwise rotation    Figure 2 i × i = 180° rotation

Figure 3 Locating the point 3 + 2i

Note: While complex numbers add like 2-dimensional vectors they do not multiply like vectors. Vectors can be n-dimensional while complex numbers are inherently 2-dimensional. Dot products of vectors are scalars (that is, ordinary non-dimensional numbers) while cross products of vectors lie in a space perpendicular to the plane of the multiplying vectors. Products and quotients of complex numbers lie in the same 2-dimensional plane as the complex factors. With this in mind it is better not to call the points in the complex plane “vectors.” Electrical engineers more appropriately use the words “phasors” or “impedances.” In this paper I will try to use the words “phasors” or “complex numbers” instead of vectors, and additionally I will try to use the words “horizontal” and “vertical” component instead of the words “real” and “imaginary.” However, setting convention aside is not easy.

The 2-dimensional algebraic plane of complex numbers

The algebraic properties of the points (complex numbers, phasors) in the complex plane have a lot in common with our familiar ordinary numbers. Their algebraic properties permit the basic operations of addition, subtraction, multiplication and division. As in the case of the ordinary numbers, the commutative, associative and distributive laws apply to the operations of addition and multiplication of complex numbers. Because these laws apply we need not worry when doing complex additions or multiplications about the order or grouping of the terms or factors.

There are situations where it is advantageous to complete the complex plane by appending an additional single point called “infinity” which results from division by zero. This point is constructed by identifying both ends of any line as this single point, thereby forming the line into a ring. This single point lies at both ends of every line in every direction beyond any bound. The addition of this single point allows the 2-dimensional flat plane to be mapped in a one-to-one correspondence onto a sphere. We must note that this complex number system differs from our conventional number system where division by zero is not permitted. At this point I strongly recommend that the reader view the marvelous video “Mobius Transformations Revealed.” 1

I should also note that while in the real number system negative numbers do not have square roots, in the complex number system every number except zero and infinity has two distinct square roots. In fact, every nth degree polynomial has n roots, not necessarily distinct.

The Cartesian and polar forms of complex numbers

There are two common forms for describing 2-dimensional vectors. The Cartesian form describes the vector in terms of its horizontal and vertical coordinates. The polar form describes the vector in terms of its distance from the origin, or magnitude, and the angle the vector makes with the horizontal axis. Modern scientific calculators provide the capability of converting between the two forms. Because the conversion requires two values for the computation and yields two values, the calculator manual should be consulted to see how a particular calculator model handles the separation of the arguments and the results. The conversion equations as seen in Figure 4 are:

Polar to Cartesian: (r, θ) → (x, y):  x = r cos(θ),  y = r sin(θ)
Cartesian to Polar: (x, y) → (r, θ):  r = √(x² + y²),  θ = arctan(y/x)


r = √(x² + y²), α = arctan(y/x);  x = r cos(α), y = r sin(α)

Figure 4 Polar ⇔ Cartesian conversion

In the Cartesian form addition and subtraction of vectors are easy. Addition is simply performed by adding the horizontal components of the summands and then adding the vertical components. Similarly subtraction is performed by subtracting the corresponding coordinate components.

An application of addition of 2-dimensional vectors is found in mechanics in the computation of the resultant of given forces. The vector forces are given as magnitudes and directions, usually provided in polar form. To compute the resultant, convert the forces to Cartesian form and then add their horizontal and vertical components. Convert the sum back to polar form to obtain the magnitude and direction of the resultant force.
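The procedure above can be sketched in a few lines of Python; the two forces and their angles are made-up sample values, not taken from the paper:

```python
import math

def polar_to_cart(r, theta_deg):
    """Convert (magnitude, angle in degrees) to (x, y)."""
    t = math.radians(theta_deg)
    return r * math.cos(t), r * math.sin(t)

def cart_to_polar(x, y):
    """Convert (x, y) back to (magnitude, angle in degrees)."""
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

# Two hypothetical forces in polar form: 10 N at 30 deg and 6 N at 120 deg
forces = [(10.0, 30.0), (6.0, 120.0)]

# Convert each force to Cartesian form and add the components
sx = sum(polar_to_cart(r, a)[0] for r, a in forces)
sy = sum(polar_to_cart(r, a)[1] for r, a in forces)

# Convert the sum back to polar form: the resultant force
magnitude, direction = cart_to_polar(sx, sy)
print(magnitude, direction)
```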

Addition and subtraction of complex numbers in polar form are not very easy, but multiplication and division are.

The distributive law of numbers applies to the multiplication of complex numbers in Cartesian form: (a + bi)*(c + di) = ac + bci + adi + bd i*i .

Since multiplication by i is a rotation of 90º, the product i*i combines two 90º rotations into a rotation of 180º, or multiplication by –1, so (a + bi)*(c + di) = ac – bd + (bc + ad)i
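As a quick numeric check, the Cartesian multiplication rule can be compared against Python's built-in complex type; the factor values here are arbitrary:

```python
# Check the Cartesian rule (a+bi)(c+di) = (ac - bd) + (bc + ad)i
# against Python's built-in complex arithmetic, using sample values.
a, b = 3.0, 2.0
c, d = 1.0, -4.0

rule = complex(a * c - b * d, b * c + a * d)   # the rule derived above
builtin = complex(a, b) * complex(c, d)        # Python's own multiplication

assert rule == builtin
print(rule)
```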

Figure 5 Multiplying the phasor 4 + 3i by i induces a 90° counterclockwise rotation to –3 + 4i.


We can now derive the rule for multiplication of complex numbers in polar form.

(x1 + iy1)(x2 + iy2) = r1 e^(iα) · r2 e^(iβ) = r1 r2 {cos α + i sin α}{cos β + i sin β}

= r1 r2 {(cos α cos β – sin α sin β) + i (sin α cos β + sin β cos α)}

= r1 r2 {cos(α + β) + i sin(α + β)} = r1 r2 e^(i(α + β)).

We see that to multiply complex numbers in polar form, simply multiply the magnitudes and add the angles. It can similarly be proven that division of complex numbers in polar form can be performed by dividing their magnitudes and subtracting their angles:

(x1 + iy1)/(x2 + iy2) = (r1 e^(iα))/(r2 e^(iβ)) = (r1/r2) e^(i(α – β))
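These polar rules can be verified numerically with Python's cmath module; the magnitudes and angles below are arbitrary sample values:

```python
import cmath

# Verify the polar rules: multiply magnitudes / add angles,
# divide magnitudes / subtract angles.
z1 = cmath.rect(2.0, cmath.pi / 6)   # r1 = 2, alpha = 30 degrees
z2 = cmath.rect(5.0, cmath.pi / 4)   # r2 = 5, beta = 45 degrees

prod = z1 * z2
quot = z1 / z2

r_p, ang_p = cmath.polar(prod)
r_q, ang_q = cmath.polar(quot)

assert abs(r_p - 2.0 * 5.0) < 1e-12                      # magnitudes multiply
assert abs(ang_p - (cmath.pi/6 + cmath.pi/4)) < 1e-12    # angles add
assert abs(r_q - 2.0 / 5.0) < 1e-12                      # magnitudes divide
assert abs(ang_q - (cmath.pi/6 - cmath.pi/4)) < 1e-12    # angles subtract
```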

Now we should note that while complex numbers are inherently 2-dimensional objects, they can be manipulated to form functions almost identically to the way the ordinary “real” numbers are manipulated. Following are examples of functions of complex arguments which are extensions of the ordinary functions studied in differential and integral calculus. While it appears that ordinary algebraic variables are being manipulated, these functions or mappings are actually assigning points in the 2-dimensional w-plane to points in the 2-dimensional z-plane.

w = az + b
w = (az + b)/(cz + d)
w = z² , w = √z
w = zⁿ , w = ⁿ√z
w = e^z = e^(x + iy) = e^x e^(iy)
w = ln(z)
w = sin(z) , w = arcsin(z)
w = cos(z) , w = arccos(z)
w = tan(z) , w = arctan(z)
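Python's cmath module implements these same mappings, so the claims can be spot-checked at a sample point z (chosen arbitrarily here):

```python
import cmath

z = complex(1.0, 1.0)   # an arbitrary point in the z-plane

# Each mapping assigns a point w in the w-plane to the point z
w_exp  = cmath.exp(z)    # w = e^z
w_log  = cmath.log(z)    # w = ln(z) = ln|z| + i arg(z)
w_sin  = cmath.sin(z)    # w = sin(z)
w_sqrt = cmath.sqrt(z)   # one of the two square roots of z

# e^z splits into e^x * e^(iy), exactly as claimed above
w_split = cmath.exp(z.real) * cmath.exp(1j * z.imag)

# Squaring the square root recovers z; arcsin inverts sin
roundtrip_sqrt = w_sqrt ** 2
roundtrip_sin  = cmath.asin(w_sin)

assert abs(w_exp - w_split) < 1e-12
assert abs(roundtrip_sqrt - z) < 1e-12
assert abs(roundtrip_sin - z) < 1e-12
```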

The reflection in the horizontal axis x – iy of a complex number z = x + iy is called the conjugate of z and is represented by the symbol z*. The conjugate of z can be used to extract the horizontal and vertical components of z and the magnitude of z.

Re(z) = (z + z*)/2 ,  Im(z) = (z – z*)/(2i)  and  |z| = √(zz*) = √(x² + y²)

Scientific calculators usually provide the operations Re(z), Im(z), |z| and arg(z).

A comparison of the order and metric properties of complex numbers

In the real number system, numbers are ordered by their position on the number line. If two real numbers are not equal then one of them is further to the right along the line and the distance between them is the absolute value of their difference. In the complex number system the distance between two points is still the absolute value of their difference. However, two numbers like 3 + 4i and 4 + 3i have the same size but are not the same.


Isomorphism

The concept of isomorphism pervades modern mathematics. The idea is that we see two different sets of objects called domains which correspond, map or transform one-to-one both ways. Operations on the objects in either domain can be performed and the images of the results of the operations in either domain correspond to the results of the operations in the other domain. The value of the idea is that there are times when it is easier to see or do things in one domain than in the other. When computations are easier in one domain we should transform to that domain, do the corresponding computation and then transform back.

The Logarithm Isomorphism

The originating objects are the numbers, say N and M. The transformed objects are called logarithms.

Positive Numbers ⇔ Logarithms of positive numbers

Objects
1 ⇔ 0
N ⇔ Log(N)
M ⇔ Log(M)

Operations
× ⇔ +
÷ ⇔ –
1/N ⇔ – Log(N)
N^p ⇔ p Log(N)
N^(1/q) ⇔ (1/q) Log(N)

Examples of isomorphism include logarithms, Fourier series and Laplace transforms. Logarithms map multiplication and division to addition and subtraction. Fourier series map periodic functions to pairs of sequences. And Laplace transforms are linear mappings which convert linear differential equations to linear algebraic equations. Euler employed the concept in his study of generating functions and C. P. Steinmetz promoted the concept with his use of phasors to perform alternating current analysis. While near the beginning of the 20th century Edison was immorally vilifying alternating current as dangerous, ultimately economics dictated that the national standard of electrical power distribution would be alternating current, which of course was found to be not unreasonably unsafe.
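The logarithm isomorphism in the box above can be demonstrated directly: transform, do the easier operation, transform back. N, M and the exponent are arbitrary sample values:

```python
import math

N, M = 8.0, 32.0

# Multiplication in the number domain corresponds to addition of logarithms
product_via_logs = math.exp(math.log(N) + math.log(M))
assert abs(product_via_logs - N * M) < 1e-9

# Division corresponds to subtraction of logarithms
quotient_via_logs = math.exp(math.log(N) - math.log(M))
assert abs(quotient_via_logs - N / M) < 1e-12

# N^p corresponds to p * Log(N)
power_via_logs = math.exp(3 * math.log(N))
assert abs(power_via_logs - N ** 3) < 1e-9
```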

The study of Euler’s i will be continued in the remainder of this paper by examining three examples of isomorphism.

1) Sinusoidal signals ⇔ phasors
2) AC circuit analysis: sinusoidal voltages and currents; capacitors, inductors and resistors ⇔ voltage and current phasors and impedances
3) Complex numbers ⇔ 2×2 matrices representing linear transformations of 2-dimensional spaces.


The phasor isomorphism of same-frequency sinusoidal signals

A sinusoidal signal is an oscillation having a magnitude, a frequency and a phase shift. The algebraic notation for a voltage oscillation is v(t) = C cos(2πf t + θ) where C is the size or amplitude of the signal, f is the frequency in cycles per second and θ is the phase shift. If instead of cycles the angle is measured in radians, then ω = 2πf and the voltage signal would be written as v(t) = C cos(ω t + θ).

Imagine, the voltage is oscillating between the values +C and –C with a frequency f, and with positive peaks occurring when t = – θ/ω. When two voltage generators are placed in series, at every instant the voltage values add. When two oscillating currents enter a node through two wires and leave by a third, the current in the third wire is the instantaneous sum of the original two currents. It doesn’t appear possible that there could be a simple way to conceive of and compute sums and differences of sinusoidal oscillations. Definitely it was beyond the comprehension of Edison. Charles Proteus Steinmetz was a hunchbacked German immigrant mathematician who understood the way. He used the first of the following principles:

The addition of two or more sinusoids of the same frequency results in a sinusoid of the same frequency. As an example see figure 6.

Adding two or more sinusoids of different frequencies does not produce a sinusoid but instead produces a signal called a complex waveform. As an example see figure 7.

Graphs of these principles can also be displayed with any graphing calculator.
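In place of a graphing calculator, a short Python loop can check the same-frequency principle numerically for the example of Figure 6:

```python
import math

# Check that sin(x) + cos(x) equals sqrt(2) * cos(x - pi/4), i.e. the sum of
# two same-frequency sinusoids is again a sinusoid, at many sample points.
max_err = 0.0
for k in range(100):
    x = k * 0.1
    lhs = math.sin(x) + math.cos(x)
    rhs = math.sqrt(2) * math.cos(x - math.pi / 4)
    max_err = max(max_err, abs(lhs - rhs))

assert max_err < 1e-12
print("sum of same-frequency sinusoids is a sinusoid; max error:", max_err)
```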

Figure 6 Adding same-frequency sinusoids: y = sin(x), y = cos(x) and y = sin(x) + cos(x) = 1.414 cos(x – π/4)

Figure 7 Adding different-frequency sinusoids: y = sin(x), y = sin(2x) and y = sin(x) + sin(2x)

And he used an additional principle: at a given frequency a sinusoidal oscillation is completely determined by its peak value and the time of occurrence of the peak or the phase shift. The peak value and the angle of the phase shift could be used as the polar form of a 2-dimensional vector.

The cosine would be associated with the vector [1, 0], or in polar coordinates 1 /0º. The vectors of sinusoids whose peaks occurred before the cosine would be plotted in the first and second quadrants and lagging sinusoids would be plotted in the third and fourth quadrants. In this scheme, since the sine lags the cosine by 90º, the vector of the sine points downward. If complex number notation is used to represent sinusoids then the vector for a sine in this system will be 0 – i, or 1 /–90º. These vectors or complex numbers which represent sinusoidal waveforms are called phasors. The graphs of an unshifted sine and an unshifted cosine are displayed in figure 6. Their sum is seen to lag the cosine by 45º.

The trigonometric formula for the cosine of the sum of two angles is: cos( α + β ) = cos α cos β – sin α sin β . If we let α = ω t and β = θ , then

C cos(ω t + θ ) = C cos θ cos ω t – C sin θ sin ω t

= A cos ω t – B sin ω t        Equation 1

where A = C cos θ and B = C sin θ. The phasor notation for Equation 1 is C /θ = A + B i.

Equation 1 can be restated: any sinusoidal signal of amplitude C, radian frequency ω and phase shift θ can be decomposed into two components of the same radian frequency, an unshifted cosine of amplitude C cos θ and an unshifted sine of amplitude –C sin θ.

The Same-Frequency Oscillation ⇔ Phasor Isomorphism

The originating objects are the sinusoidal oscillations of the form, v(t) = C cos( 2πf t + θ) The transformed objects are complex numbers called phasors

Sinusoids                                         Phasors
v1(t) = C1 cos(2πf t + θ1) ⇔ C1 /θ1 = A1 + B1 i = C1 cos θ1 + C1 sin θ1 i
v2(t) = C2 cos(2πf t + θ2) ⇔ C2 /θ2 = A2 + B2 i = C2 cos θ2 + C2 sin θ2 i

Linear combinations of the sinusoids transform to linear combinations of the phasors

a v1(t) + b v2(t) ⇔ (aA1 + bA2) + (aB1 + bB2) i ⇔ (aC1 cos θ1 + bC2 cos θ2) + (aC1 sin θ1 + bC2 sin θ2) i


Sinusoids and their phasors, illustrated graphically: y = sin(x), y = cos(x), and their sum y = cos(x) + sin(x) = 1.414 cos(x – 45°), whose phasor is 1 – i = 1.414 /–45°.


The rest is simple. To add a number of sinusoids of the same frequency, given their peak values and phase shifts, treat them as vectors. The components of the resultant are the respective sums of their horizontal components and their vertical components. There is no need to consider, during the solution of the circuit, the values of the oscillations at every instant of time. The values at any particular instant of time can be obtained from Equation 1.
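A minimal Python sketch of this procedure, with two made-up same-frequency voltages, confirms that the phasor sum reproduces the instantaneous time-domain sum at every sampled instant:

```python
import cmath
import math

def phasor(C, theta_deg):
    """Phasor of C*cos(wt + theta): magnitude C at angle theta."""
    return cmath.rect(C, math.radians(theta_deg))

# Two hypothetical same-frequency voltages: 3 cos(wt) and 4 cos(wt + 90 deg)
p = phasor(3.0, 0.0) + phasor(4.0, 90.0)
C, theta = abs(p), math.degrees(cmath.phase(p))
print(C, theta)   # peak value and phase shift of the sum

# Spot-check against the instantaneous time-domain sum at several instants
w = 2 * math.pi * 60.0   # any fixed frequency works
for k in range(50):
    t = k * 1e-4
    time_domain = 3 * math.cos(w * t) + 4 * math.cos(w * t + math.pi / 2)
    via_phasor = C * math.cos(w * t + math.radians(theta))
    assert abs(time_domain - via_phasor) < 1e-9
```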

The DC linear circuit

DC linear electric circuits are wiring configurations of resistors and constant current or voltage sources. The connection points are called nodes. The paths that connect the nodes are called branches. Resistor values are positive but voltage and current sources have a magnitude and a sign that indicates direction. Voltage is the ability to propel electric charge through a resistor. One node called ground is taken to be at 0 volts and then the other nodes have a single voltage value measured with respect to ground. The current in a branch can have only one value.

Charge cannot accumulate at a node and therefore at every instant the sum of the currents entering a node must equal the sum of the currents leaving the node. If one starts at a node and algebraically adds or subtracts the voltage across each component in some path that terminates at the originating node, the result must be zero. The conventional DC circuit problem is to find how the currents are distributed, given the wiring configuration, the resistor values and the type, locations and values of the sources. The equations that describe this problem are all of the first degree in the voltages and currents and are solvable with the rules of algebra. In other words the equations that describe DC circuits are linear.

Electrical engineers learn various simplifying concepts and computational techniques that apply to linear circuits. These techniques, such as component combination, voltage and current division, nodal analysis, mesh analysis, superposition and Thevenin’s equivalence are treated in texts on circuit analysis. We will soon see that even though in AC circuits the current and voltage waveforms are oscillating, the equations are also linear implying that the concepts and computational methods which worked for DC circuits might also work for AC circuits.

The steady-state AC linear circuit isomorphism of Steinmetz

The study of alternating current theory starts by examining circuits where all the voltage and current sources are sinusoidal and have the same frequency. If the circuit components are “linear” then all the signal responses, voltage and current, will be sinusoidal and have the same frequency as the sources. The linear components in simple AC circuits are of three kinds: resistors, inductors and capacitors. In DC circuits the voltage and current signals are constant causing both the voltage across an inductor and the current through a capacitor to be zero. Therefore in DC circuits inductors can be replaced by wires with zero resistance or shorts and capacitors can be removed from the circuit without affecting any other signal response. This means DC circuits are comprised of only constant current and voltage sources and resistors.

Inductors and capacitors store energy, while currents in resistors generate heat. This heat energy leaves the circuit. When a switch is closed or opened energy initially stored as capacitor voltages and inductor currents produces transient signals that ultimately will decay. Our initial interest is

in finding the sinusoidal voltages and currents that remain after the magnitudes of these transients become small enough to neglect. These remaining sinusoidal signals are called steady-state. Solutions for general time-varying signals that include decaying transients will be found later by means of Laplace transforms, which are another isomorphic technique.

The equations defining the behavior of circuit components are called the constitutive equations. The constitutive equations of the three simple linear components in time-varying circuits with voltage and current signals v(t) and i(t) follow:

1) Inductors with inductance L:  vL(t) = L di/dt        Equation 2

2) Resistors with resistance R: vR(t) = R i(t) Equation 3

3) Capacitors with capacitance C:  vC(t) = (1/C) ʃ i(t) dt        Equation 4

Equations 2, 3 and 4 can be summed up as:

1) Inductors induce voltages proportional to the derivatives of the current waveforms through them.
2) The voltages across resistors have the same shape as the current waveforms through them, scaled up or down by the value of R.
3) The voltages across capacitors are proportional to the integrals of the current waveforms through them.

The circuits we are studying are special in that the signals are not general time-varying signals but are all sinusoids of the same frequency. A notation is needed that will enable us to write the equations describing the steady-state signal responses in circuits made up of inductors, resistors and capacitors that are driven by single-frequency sinusoidal sources.

Note: Electrical engineers use i to mean current and use the symbol j in place of i to mean the complex number 90º counter-clockwise rotation operator. In the following this notation will be used. Also the symbol C will be used to mean capacitance value, not signal amplitude.

Since the source and response signals in these circuits are sinusoidal and the frequency is fixed all the signals can be represented as phasors in a 2-dimensional complex plane. The equation of the derivative of a general sinusoid is:

d/dt [A cos(ωt + θ)] = – ω A sin(ωt + θ)        Equation 5

In phasor notation equation 5 is written as

d/dt A /θ = ω A /(θ + 90º) = jω A /θ .


This means that the phasor of the derivative of a sinusoidal signal is ω times the size of the original phasor and rotated 90º counterclockwise.
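This can be checked numerically: multiplying a phasor by jω should scale it by ω, advance it 90º, and reproduce the analytic derivative at every sampled instant. The amplitude, angle and frequency below are arbitrary sample values:

```python
import cmath
import math

A, theta, w = 1.0, math.radians(30.0), 1000.0
p = cmath.rect(A, theta)   # phasor of the original signal
dp = 1j * w * p            # claimed phasor of the derivative

# Derivative phasor: magnitude w*A, angle advanced by 90 degrees
assert abs(abs(dp) - w * A) < 1e-9
assert abs(cmath.phase(dp) - (theta + math.pi / 2)) < 1e-12

# Compare with the analytic derivative -w*A*sin(wt + theta) at sample times
for k in range(20):
    t = k * 1e-4
    analytic = -w * A * math.sin(w * t + theta)
    from_phasor = abs(dp) * math.cos(w * t + cmath.phase(dp))
    assert abs(analytic - from_phasor) < 1e-6
```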

As an example, say the original signal size is 1 volt and the original radian frequency ω = 1000 rad/sec. One could wonder why the derivative has the surprisingly large size of 1,000 volts per second. The answer is that this signal is oscillating 1,000 times faster and has much less time to reach its peak. The rate of change in the derivative must be 1,000 times larger.

Integration is the inverse operation of differentiation. The integration of a sinusoid has the effect of dividing the signal amplitude by ω and delaying the sinusoid by 90º, that is;

ʃ A cos(ωt + θ) dt = (A/ω) sin(ωt + θ) = (A/ω) cos(ωt + θ – 90º)        Equation 6

The phasor equation corresponding to Equation 6 is ʃ A /θ = (1/jω) A /θ = (A/ω) /(θ – 90º).

In steady-state AC circuit analysis if the phasors for voltage and current are V and I then the constitutive equations of the components; R, L, and C become:

VL = jωL IL ,  VR = R IR  and  VC = (1/jωC) IC        Equations 7

In DC circuits R is an ordinary positive number which multiplies current to produce voltage. The analogue to resistance in steady-state AC circuits is called impedance and is represented by the symbol Z. The symbol ZL = jωL represents inductive impedance and the symbol ZC = 1/(jωC) represents capacitive impedance. The complex number impedance multiplies the complex current phasor to produce the corresponding voltage phasor across the component. Equations 7 can be written as:

VL = ZL IL ,  VR = R IR  and  VC = ZC IC .

One should pause to wonder that while ordinarily apples are not added to oranges, the impedances of linear components wired in series can be added. While sinusoidal oscillations cannot be divided, complex number arithmetic allows a voltage phasor which is a complex number, to be divided by a current phasor, a complex number, to produce an impedance which is a third complex number.

The sizes of the inductor and capacitor impedances, XL and XC, are called reactances. The AC constitutive equations can now be written V = jXL I, V = R I and V = – jXC I, where XL = ωL and XC = 1/(ωC). The reactance of an inductor is positive, while capacitive reactance is negative.
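A short sketch with made-up component values illustrates the reactances. It also shows the cancellation at the resonant frequency ω₀ = 1/√(LC), a standard fact assumed here rather than derived in the paper:

```python
import math

# Hypothetical R, L, C components wired in series (sample values)
R, L, C = 50.0, 10e-3, 1e-6          # ohms, henries, farads
w = 2 * math.pi * 1000.0             # radian frequency at f = 1 kHz

XL = w * L                           # inductive reactance (positive)
XC = 1.0 / (w * C)                   # capacitive reactance magnitude

Z = R + 1j * XL - 1j * XC            # series impedances simply add
print(abs(Z), math.degrees(math.atan2(Z.imag, Z.real)))

# At the resonant frequency w0 = 1/sqrt(LC) the two reactances cancel
w0 = 1.0 / math.sqrt(L * C)
Z0 = R + 1j * w0 * L - 1j / (w0 * C)
assert abs(Z0.imag) < 1e-9           # reactances cancel exactly
assert abs(Z0 - R) < 1e-9            # only the resistance remains
```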

Now it will appear that with these new concepts and notations and since complex arithmetic allows for addition, subtraction, multiplication and division, AC analysis can be carried out

similarly to DC analysis. As examples, the laws of DC and AC analysis will be written below in order to emphasize the parallelism. The signals in all linear circuits obey the superposition principle.

Configuration laws

In both AC and DC circuits the instantaneous algebraic sum of currents entering a node is zero. In both AC and DC circuits the instantaneous algebraic sum of voltage drops around a loop is zero. In AC circuits the algebraic sum of current phasors entering a node is zero and the algebraic sum of phasor voltage drops around a loop is zero.

Component combination laws

In DC circuits the total resistance of resistors placed in series is the sum of the values of the individual resistors. In AC circuits the total impedance of impedances placed in series is the algebraic sum of the values of the individual impedances. It must be kept in mind that when wired in series, the negative reactance of a capacitor cancels the positive reactance of an inductor.

In DC circuits the equivalent resistance of two resistors placed in parallel is Req = R1R2/(R1 + R2). In AC circuits the equivalent impedance of two parallel impedances is Zeq = Z1Z2/(Z1 + Z2).

In DC circuits the law of voltage division is Vo = R2Vin/(R1 + R2). The corresponding law in AC circuits is Vo = Z2Vin/(Z1 + Z2).
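The AC voltage-division law can be exercised with complex arithmetic. The circuit below, a resistor feeding a capacitor with made-up values near the corner frequency, is a hypothetical example, not one from the paper:

```python
import math

# AC voltage division with complex impedances, mirroring the DC formula.
w = 2 * math.pi * 1000.0        # 1 kHz source
Vin = 10.0 + 0j                 # 10-volt unshifted cosine as a phasor
Z1 = 1000.0 + 0j                # 1 kOhm resistor
Z2 = 1.0 / (1j * w * 159.2e-9)  # impedance of a 159.2 nF capacitor

Vo = Z2 * Vin / (Z1 + Z2)       # same form as the DC law Vo = R2*Vin/(R1+R2)

gain = abs(Vo) / abs(Vin)
phase_deg = math.degrees(math.atan2(Vo.imag, Vo.real))
print(gain, phase_deg)

# Near the corner frequency the gain is about 1/sqrt(2), phase about -45 deg
assert 0.70 < gain < 0.72
assert -46.0 < phase_deg < -44.0
```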

Correspondingly in both AC and DC nodal and mesh analysis concepts and the technique of Thevenin’s equivalence prevail.

Circuits often have more than one signal source. In linear circuits both DC and AC the total response of all the sources can be computed by adding the responses of each of the sources separately. This property of linear circuits is called superposition. The superposition principle prevails even when the sources have different frequencies.

Linear Transformations or Mappings between 2-dimensional linear spaces

Now we continue to the last example of an isomorphism. This isomorphism is the correspondence between rotational mappings of 2-dimensional linear spaces and complex numbers. These rotational mappings provide a second, real definition of Euler’s “imaginary” i.
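A sketch of this correspondence, implemented with plain Python lists as 2×2 matrices: the matrix form [[a, –b], [b, a]] assumed here is the standard representation of a + bi, under which i itself becomes the 90º counterclockwise rotation matrix:

```python
def to_matrix(z):
    """The 2x2 matrix [[a, -b], [b, a]] representing a + bi."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(p, q):
    """Plain 2x2 matrix multiplication."""
    return [[sum(p[r][k] * q[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

# i corresponds to the 90-degree counterclockwise rotation matrix
i_matrix = to_matrix(1j)
assert i_matrix == [[0.0, -1.0], [1.0, 0.0]]

# Two such rotations give multiplication by -1: the matrix form of i*i = -1
assert mat_mul(i_matrix, i_matrix) == to_matrix(complex(-1, 0))

# Matrix multiplication mirrors complex multiplication for sample values
z1, z2 = complex(3, 2), complex(1, -4)
assert mat_mul(to_matrix(z1), to_matrix(z2)) == to_matrix(z1 * z2)
```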

Mappings assign points in one space called the domain D to a second space called the range R. To obtain this second definition of i, let both the range and domain be 2-dimensional and have Cartesian coordinate systems. Consider the points of the spaces D and R as vectors with domain coordinates (x, y) and range coordinates (u, v). The equations of these 2-dimensional first degree or linear mappings have the algebraic form:


u = Ax + By
v = Cx + Dy

or in matrix-vector form [u; v] = [A B; C D][x; y]

Later on we will need an important property of the matrix called the determinant. The determinant of a matrix is a function of its entries:

Det [A B; C D] = AD – BC

Basis vectors

In order to locate points in general 2-dimensional non-linear spaces, two families of intersecting curves can be used to establish a coordinate system. If a point lies on one curve in each of the two families, the parameters of the families determine the location of each particular point. As an example consider the polar coordinate system. It is non-linear. The two curve families are a set of concentric circles about the origin and a pencil of straight lines through the origin. A point is located by giving its distance r to the origin and the angle θ a line in the pencil through the point makes with the horizontal. On the Earth’s surface, which is another non-linear space, the coordinate curves are the lines of latitude and longitude.

The points in a linear space are identified with vectors that are attached to the origin. A coordinate system can be established based on any two non-parallel vectors which are called basis vectors. Any point in a flat linear space can be reached by forming a linear combination of the basis vectors; that is, if the basis vectors are A̅ and B̅, any point P̅ can be reached by multiplying A̅ and B̅ by suitable numbers and then adding. Say P̅ = a A̅ + b B̅. Then the coordinates of P̅ are [a, b]. The coordinate curves in a linear space are straight lines, parallel and uniformly spaced. Rarely in courses of linear algebra are coordinates mentioned; the concepts of basis vectors and vector components better suit the need.

In many cases the most suitable choice for basis vectors produces our ordinary Cartesian coordinate system. The basis vectors have length 1 and are perpendicular. This system is called “orthonormal.” In such a system the area of the parallelogram spanned by two vectors A̅ = [x1, y1] and B̅ = [x2, y2] is the value of the determinant of the matrix formed from the vectors:

Det [x1 y1; x2 y2] = x1y2 – x2y1

If A̅ and B̅ are not zero, the two vectors are co-linear when their determinant has a value of zero.

The reader should try to show that linear mappings with non-zero determinants have the following properties:

The origin maps to the origin. Straight lines map to straight lines. Parallel lines map to parallel lines. Uniformly spaced lines map to uniformly spaced lines. If two vectors in the domain are added the image of the sum is the sum of the images of the original vectors. Note: This is the principle of linearity.

Proceedings of the 2019 Conference for Industry and Education Collaboration Copyright ©2019, American Society for Engineering Education

- Bounded regions map to bounded regions.
- Linear mappings are inherently 1st degree and can be represented by matrices.
- Linear mappings preserve the degrees of algebraic curves.
- Continuous smooth curves map to continuous smooth curves.
- If two curves in the domain have a point of tangency, then the images of the curves are tangent at the image of the point of tangency.

From the above it follows that the images of parallelograms in the domain are parallelograms in the range. It is to be expected that a linear mapping from a 2-space to a 2-space will map square regions to rotated parallelograms, and a circle inscribed in a square will map to an ellipse inscribed in the image parallelogram. See figure 10. If both the domain and the range of a mapping [ A  B ; C  D ] are orthonormal, then the determinant of the mapping, AD − BC, provides the ratio of the area of the image to the area of a figure in the domain.
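The area-ratio property can be checked directly. In the following NumPy sketch an arbitrary example matrix is applied to the corners of the unit square, and the area of the image parallelogram is compared with the absolute value of the determinant:

```python
import numpy as np

# An arbitrary example mapping.
M = np.array([[2.0, 1.0],
              [0.5, 2.0]])

# Corners of the unit square as columns.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)
image = M @ square  # images of the corners form a parallelogram

# Area of the image parallelogram from its edge vectors.
e1 = image[:, 1] - image[:, 0]
e2 = image[:, 3] - image[:, 0]
image_area = abs(e1[0] * e2[1] - e2[0] * e1[1])

# The unit square has area 1, so the ratio equals |AD - BC|.
print(image_area, abs(np.linalg.det(M)))  # 3.5 and 3.5
```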

The entries in the matrix depend on the choice of basis vectors in both the range and domain. If the basis vectors of either of the spaces are changed, the entries of the matrix will change. A particularly nice choice of the basis vectors results in a matrix with non-zero entries only on the main diagonal. The equations of the mapping in matrix-vector form

[ u ; v ] = [ A  0 ; 0  D ] [ x ; y ] ,  that is,  u = Ax and v = Dy.

A diagonal matrix can easily be solved to find the inverse mapping: x = u/A and y = v/D. When a mapping has a diagonal matrix it means that the first domain basis vector maps to a multiple of the first range basis vector and that the second domain basis vector maps to a multiple of the second range basis vector. This statement holds for mappings of higher dimensions.
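A short sketch of the diagonal mapping and its inverse, with A, D and the domain point chosen as arbitrary example values:

```python
import numpy as np

A, D = 3.0, 0.5            # diagonal entries (example values)
M = np.array([[A, 0.0],
              [0.0, D]])

x, y = 2.0, 4.0
u, v = M @ np.array([x, y])   # forward mapping: u = A*x, v = D*y
print(u, v)                   # 6.0 2.0

# The inverse mapping recovers the domain point: x = u/A, y = v/D.
print(u / A, v / D)           # 2.0 4.0
```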

There are times when the mapping describes the movement of points in a single space. In that case the range and domain spaces have the same basis vectors. A list of special diagonal mappings of a space into itself follows:

The mapping [ 1  0 ; 0  1 ] leaves every point fixed and is called the identity.

The mapping [ −1  0 ; 0  −1 ] = − [ 1  0 ; 0  1 ] reverses the direction of the vector to every point.

The mapping [ a  0 ; 0  a ] = a [ 1  0 ; 0  1 ] acts as a scalar, multiplying the size of every vector and either maintaining or reversing the direction of a vector depending on the sign of a.

The mapping A = [ a  0 ; 0  1 ] horizontally stretches or contracts a circle at the origin depending on whether the value of a is larger or less than 1. Circles map to ellipses.


The mapping B = [ 1  0 ; 0  b ] vertically stretches or contracts a circle at the origin depending on whether b is larger or less than 1. Circles map to ellipses.

These diagonal matrices move the points in the basis directions to other points in the basis directions. The directions of vectors which are not basis vectors change when the entries on the diagonal are not equal. The off-diagonal components of the matrix of a mapping alter the directions of the domain basis vectors.

However, usually matrices are not diagonal, and so the directions of most vectors change under general linear mappings. Even when the matrices of 2-dimensional mappings have non-zero off-diagonal entries, there can be two directions, called eigenvectors, whose directions do not change. When a basis for a linear space can be composed of eigenvectors, the matrix representing the mapping will be diagonal. The entries on the diagonal are called eigenvalues. The command EIGSHOW in the MATLAB programming language provides a visualization of eigenvectors.
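Eigenvalues and eigenvectors can also be computed numerically. A NumPy sketch using an arbitrary symmetric example matrix:

```python
import numpy as np

# An arbitrary symmetric example matrix with off-diagonal entries.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(M)
print(np.sort(eigenvalues))  # [1. 3.]

# Each eigenvector's direction is unchanged by the mapping:
# M v equals lambda v for the matching eigenvalue lambda.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(M @ v, lam * v)
```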

If the matrix represents the quadratic form of a rotated conic section, then the eigenvectors point in the directions of the major and minor axes of the conic section.

The matrix [ 1  0.5 ; 0  1 ] represents a mapping with only one eigenvector direction, [1, 0]. The point on the vertical axis, [0, 1], maps to the point [0.5, 1], which is not on the vertical axis and so is not an eigenvector. See figure 11. This kind of mapping, in which the points on a set of parallel lines slide along the same lines, is called a shear.
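The shear’s single eigendirection can be confirmed in a few lines of NumPy:

```python
import numpy as np

M = np.array([[1.0, 0.5],
              [0.0, 1.0]])  # the shear from the text

# [1, 0] maps to itself: an eigenvector with eigenvalue 1.
print(M @ np.array([1.0, 0.0]))  # [1. 0.]

# [0, 1] slides to [0.5, 1]: not an eigenvector.
print(M @ np.array([0.0, 1.0]))  # [0.5 1. ]

# Both eigenvalues equal 1, yet only one independent
# eigendirection exists.
vals = np.linalg.eigvals(M)
print(vals)  # [1. 1.]
```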

Figure 10: A linear transformation on a circle.
Figure 11: A shear transformation.

The mapping [ 0  −1 ; 1  0 ] is a kind of linear mapping which produces a 90º counterclockwise rotation of the plane. With orthonormal basis vectors, squares and circles will map to squares and circles.

This mapping is seen to map the vector [1, 0] to the vector [0, 1] and the vector [0, −1] to the vector [1, 0], implying both vectors are rotated 90º counterclockwise. Since every point in the plane can be constructed as a linear combination of the vectors [1, 0] and [0, −1], the entire plane is rotated 90º counterclockwise. Observe that a rotation mapping has no real eigenvectors.
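A sketch of the rotation matrix in NumPy; note that applying the rotation twice reverses every vector, foreshadowing the matrix definition of i later in the paper:

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0, 0.0]])  # 90-degree counterclockwise rotation

print(R @ np.array([1.0, 0.0]))   # [0. 1.]
print(R @ np.array([0.0, -1.0]))  # [1. 0.]

# Two successive rotations reverse every vector: R @ R = -I.
print(np.allclose(R @ R, -np.eye(2)))  # True

# No real direction is left fixed; the eigenvalues are the
# imaginary pair +i and -i.
print(np.linalg.eigvals(R))
```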


The mapping [ a  −b ; b  a ] can be obtained by adding the scalar matrix [ a  0 ; 0  a ] to [ 0  −b ; b  0 ], which is b times the 90º counterclockwise rotation.

[ a  −b ; b  a ] = √(a² + b²) [ a/√(a² + b²)  −b/√(a² + b²) ; b/√(a² + b²)  a/√(a² + b²) ]
               = √(a² + b²) [ cos(θ)  −sin(θ) ; sin(θ)  cos(θ) ] = r [ cos(θ)  −sin(θ) ; sin(θ)  cos(θ) ]

where cos θ = a/√(a² + b²), sin θ = b/√(a² + b²) and r = √(a² + b²).

Now we are positioned to see the isomorphism between the a + bi complex number system and the system of rotational 2×2 matrices [ a  −b ; b  a ]. If two complex numbers are added or subtracted, their corresponding matrices will add or subtract. In addition, the product of two complex numbers matches the product of their corresponding matrices:

(a + bi)(c + di) = ac − bd + (ad + bc)i  ⇔  [ a  −b ; b  a ] [ c  −d ; d  c ] = [ ac − bd  −(ad + bc) ; ad + bc  ac − bd ] .

Likewise the quotients of complex numbers and the quotients of their corresponding matrices also match.
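The isomorphism can be tested numerically for addition, multiplication and division. A NumPy sketch; the helper function as_matrix and the two sample complex numbers are illustrative choices, not part of the original text:

```python
import numpy as np

def as_matrix(z):
    """Represent the complex number a + bi as [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 3 + 4j, 1 - 2j  # arbitrary example values

# The matrix of a sum is the sum of the matrices.
print(np.allclose(as_matrix(z + w), as_matrix(z) + as_matrix(w)))  # True

# The matrix of a product is the matrix product.
print(np.allclose(as_matrix(z * w), as_matrix(z) @ as_matrix(w)))  # True

# The matrix of a quotient is the product with the inverse matrix.
print(np.allclose(as_matrix(z / w),
                  as_matrix(z) @ np.linalg.inv(as_matrix(w))))     # True
```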

The matrix form [ a  −b ; b  a ] corresponds to the Cartesian form of complex numbers, and the matrix form r [ cos(θ)  −sin(θ) ; sin(θ)  cos(θ) ] = r∠θ corresponds to the polar form of complex numbers.

When the polar forms of matrices of complex numbers are multiplied the product is:

r1 [ cos(α)  −sin(α) ; sin(α)  cos(α) ] × r2 [ cos(β)  −sin(β) ; sin(β)  cos(β) ]

= r1 r2 [ cos α cos β − sin α sin β  −(cos α sin β + cos β sin α) ; cos α sin β + cos β sin α  cos α cos β − sin α sin β ]

= r1 r2 [ cos(α + β)  −sin(α + β) ; sin(α + β)  cos(α + β) ]        Equation 8

Equation 8 verifies that for both polar matrix forms and complex numbers multiplication is accomplished by multiplying magnitudes and adding angles.
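Equation 8 can be verified numerically for particular magnitudes and angles. A NumPy sketch; the values of r1, r2, α and β are arbitrary examples:

```python
import numpy as np

def polar_matrix(r, theta):
    """Polar-form matrix r * [[cos t, -sin t], [sin t, cos t]]."""
    c, s = np.cos(theta), np.sin(theta)
    return r * np.array([[c, -s], [s, c]])

r1, alpha = 2.0, 0.3   # example magnitude and angle
r2, beta = 1.5, 0.9    # example magnitude and angle

# Multiplying the matrices multiplies magnitudes and adds angles.
product = polar_matrix(r1, alpha) @ polar_matrix(r2, beta)
print(np.allclose(product, polar_matrix(r1 * r2, alpha + beta)))  # True
```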


We should observe that Euler, the 18th-century master of series forms, on comparing the Taylor series forms of the functions e^θ, sin(θ) and cos(θ), discovered the amazing identity: e^(iθ) = cos(θ) + i sin(θ). This identity states that the unit hypotenuse of a right triangle is the vector sum of the legs. See figure 4. The following notations are commonly used for the polar form of complex numbers, although the first, which is used by electrical engineers, is the simplest:

r∠θ ,  r e^(iθ) ,  r{ cos(θ) + i sin(θ) }  and  r [ cos(θ)  −sin(θ) ; sin(θ)  cos(θ) ]
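Euler’s identity can be checked numerically with Python’s standard-library complex arithmetic; the angle below is an arbitrary example:

```python
import cmath
import math

theta = 0.7  # an arbitrary angle in radians
lhs = cmath.exp(1j * theta)                        # e^(i*theta)
rhs = complex(math.cos(theta), math.sin(theta))    # cos(theta) + i*sin(theta)
print(abs(lhs - rhs))  # effectively zero
```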

Two real definitions of Euler’s i

1. On the second page of this paper, Euler’s i was defined as an operator which, under multiplication, rotates complex numbers 90º counterclockwise. Since complex numbers are inherently 2-dimensional, there is nothing imaginary about such an operator.

2. As a second real definition, Euler’s i could be defined as a symbol representing the 2×2 matrix [ 0  −1 ; 1  0 ], which has the property that

i × i = [ 0  −1 ; 1  0 ] × [ 0  −1 ; 1  0 ] = [ −1  0 ; 0  −1 ] = −1.

Employing a symbol like i or j in place of matrix equations is a significant improvement in reading, writing and conveying the concepts represented in these equations. And so here, too, there is nothing imaginary about the use of the symbol i.

Summary

This expository paper began as an exploration of the meaning of Euler’s symbol i. Two real interpretations of i were discussed, one at the beginning of the paper and the other at the end. They were:

1) a multiplication operator which rotates complex numbers by 90º counterclockwise, and

2) an anti-symmetric matrix, [ 0  −1 ; 1  0 ].

The center of the paper was devoted to two applications of complex numbers which were rooted in the important mathematical concept of isomorphism. This simple concept should be introduced to STEM students early in their studies, certainly during the introduction of logarithms, and perhaps during discussions of rational fraction and repeating decimal forms.

Isomorphism is a simple basic concept involved in the theories of logarithms, Taylor and Fourier series, phasor and complex analysis, vibration theory, Laplace transforms and more. This concept is too important to be hidden in the shadows of advanced mathematics.


The two applications of the isomorphism concept were:

1) The beautiful relationship between linear combinations of sinusoidal, same-frequency waveforms and points in the 2-dimensional complex plane of phasors:

A cos(ωt + θ1) + B cos(ωt + θ2) ⇔ A∠θ1 + B∠θ2 , and

2) The relationship, in the analysis of alternating current circuits, between AC signals and components and complex phasors and impedances. Differentiating sinusoids advances their corresponding phasors by 90°, and integrating sinusoids delays their corresponding phasors by 90°. While inductors, resistors and capacitors in series cannot simply be added, their complex impedances can be combined. The algebra of complex numbers makes it possible to compute how currents are distributed and what the voltages are in circuits with oscillating signals.
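The phasor-addition isomorphism for same-frequency sinusoids can be demonstrated numerically. A NumPy sketch; the amplitudes, phase angles and 60 Hz frequency are arbitrary example values:

```python
import numpy as np

# Add A cos(wt + th1) + B cos(wt + th2) by adding the phasors
# A /_ th1 + B /_ th2, then compare with the pointwise sum of
# the time-domain waveforms.
A, th1 = 3.0, np.deg2rad(20.0)
B, th2 = 2.0, np.deg2rad(-50.0)
w = 2 * np.pi * 60.0  # 60 Hz, for example

phasor = A * np.exp(1j * th1) + B * np.exp(1j * th2)
C, phi = abs(phasor), np.angle(phasor)  # magnitude and angle of the sum

t = np.linspace(0.0, 0.05, 500)
direct = A * np.cos(w * t + th1) + B * np.cos(w * t + th2)
via_phasor = C * np.cos(w * t + phi)
print(np.allclose(direct, via_phasor))  # True
```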

While this paper began in search of clarification of Euler’s i, related topics in advanced analysis were discovered that are destined to belong in every STEM student’s analytical toolkit.


Biography

ANDREW GROSSFIELD Throughout his career Dr. Grossfield has combined an interest in engineering design and mathematics. He earned his BEE at CCNY. Seeing the differences between the mathematics memorized in schools and the math understood and needed by engineers has led him to a career presenting alternative mathematical insights and concepts. He is licensed in NYS and belongs to the MAA, the ASEE and the IEEE.
