Elementary algebra
From Wikipedia, the free encyclopedia

Contents

1 Additive identity: 1.1 Elementary examples; 1.2 Formal definition; 1.3 Further examples; 1.4 Proofs; 1.4.1 The additive identity is unique in a group; 1.4.2 The additive identity annihilates ring elements; 1.4.3 The additive and multiplicative identities are different in a non-trivial ring; 1.5 See also; 1.6 References; 1.7 External links

2 Additive inverse: 2.1 Common examples; 2.1.1 Relation to subtraction; 2.1.2 Other properties; 2.2 Formal definition; 2.3 Other examples; 2.4 Non-examples; 2.5 See also; 2.6 Footnotes; 2.7 References

3 Algebraic expression: 3.1 Terminology; 3.2 In roots of polynomials; 3.3 Conventions; 3.3.1 Variables; 3.3.2 Exponents; 3.4 Algebraic vs. other mathematical expressions; 3.5 See also; 3.6 Notes; 3.7 References; 3.8 External links

4 Algebraic fraction: 4.1 Terminology; 4.2 Rational fractions; 4.3 Irrational fractions; 4.4 Notes; 4.5 References

5 Algebraic operation: 5.1 Notation; 5.2 Arithmetic vs algebraic operations; 5.3 Properties of arithmetic and algebraic operations; 5.4 References; 5.5 See also

6 Associative property: 6.1 Definition; 6.2 Generalized associative law; 6.3 Examples; 6.4 Propositional logic; 6.4.1 Rule of replacement; 6.4.2 Truth functional connectives; 6.5 Non-associativity; 6.5.1 Nonassociativity of floating point calculation; 6.5.2 Notation for non-associative operations; 6.6 See also; 6.7 References

7 Brahmagupta’s identity: 7.1 History; 7.2 Application to Pell’s equation; 7.3 See also; 7.4 References; 7.5 External links

8 Brahmagupta–Fibonacci identity: 8.1 History; 8.2 Related identities; 8.3 Relation to complex numbers; 8.4 Interpretation via norms; 8.5 Application to Pell’s equation; 8.6 See also; 8.7 References; 8.8 External links

9 Carlyle circle: 9.1 Definition; 9.2 Defining property; 9.3 Construction of regular polygons; 9.3.1 Regular pentagon; 9.3.2 Regular heptadecagon; 9.3.3 Regular 257-gon; 9.3.4 Regular 65537-gon; 9.4 References

10 Change of variables: 10.1 Simple example; 10.2 Formal introduction; 10.3 Other examples; 10.3.1 Coordinate transformation; 10.3.2 Differentiation; 10.3.3 Integration; 10.3.4 Differential equations; 10.3.5 Scaling and shifting; 10.3.6 Momentum vs. velocity; 10.3.7 Lagrangian mechanics; 10.4 See also

11 Commutative property: 11.1 Common uses; 11.2 Mathematical definitions; 11.3 Examples; 11.3.1 Commutative operations in everyday life; 11.3.2 Commutative operations in mathematics; 11.3.3 Noncommutative operations in everyday life; 11.3.4 Noncommutative operations in mathematics; 11.4 History and etymology; 11.5 Propositional logic; 11.5.1 Rule of replacement; 11.5.2 Truth functional connectives; 11.6 Set theory; 11.7 Mathematical structures and commutativity; 11.8 Related properties; 11.8.1 Associativity; 11.8.2 Symmetry; 11.9 Non-commuting operators in quantum mechanics; 11.10 See also; 11.11 Notes; 11.12 References; 11.12.1 Books; 11.12.2 Articles; 11.12.3 Online resources

12 Completing the square: 12.1 Overview; 12.1.1 Background; 12.1.2 Basic example; 12.1.3 General description; 12.1.4 Non-monic case; 12.1.5 Formula; 12.2 Relation to the graph; 12.3 Solving quadratic equations; 12.3.1 Irrational and complex roots; 12.3.2 Non-monic case; 12.4 Other applications; 12.4.1 Integration; 12.4.2 Complex numbers; 12.4.3 Idempotent matrix; 12.5 Geometric perspective; 12.6 A variation on the technique; 12.6.1 Example: the sum of a positive number and its reciprocal; 12.6.2 Example: factoring a simple quartic polynomial; 12.7 References; 12.8 External links

13 Constant term: 13.1 See also

14 Cube root: 14.1 Formal definition; 14.1.1 Real numbers; 14.1.2 Complex numbers; 14.2 Impossibility of compass-and-straightedge construction; 14.3 Numerical methods; 14.4 Appearance in solutions of third and fourth degree equations; 14.5 History; 14.6 See also; 14.7 References; 14.8 External links

15 Cubic function: 15.1 History; 15.2 Critical points of a cubic function; 15.3 Roots of a cubic function; 15.3.1 The nature of the roots; 15.3.2 General formula for roots; 15.3.3 Reduction to a depressed cubic; 15.3.4 Cardano’s method; 15.3.5 Vieta’s substitution; 15.3.6 Lagrange’s method; 15.3.7 Trigonometric (and hyperbolic) method; 15.3.8 Factorization; 15.3.9 Geometric interpretation of the roots; 15.4 Collinearities; 15.5 Applications; 15.6 See also; 15.7 Notes; 15.8 References; 15.9 External links

16 Difference of two squares: 16.1 Proof; 16.2 Geometrical demonstrations; 16.3 Uses; 16.3.1 Factorisation of polynomials; 16.3.2 Complex number case: sum of two squares; 16.3.3 Rationalising denominators; 16.3.4 Mental arithmetic; 16.3.5 Difference of two perfect squares; 16.4 Generalizations; 16.4.1 Difference of two nth powers; 16.5 See also; 16.6 Notes; 16.7 References; 16.8 External links

17 Distributive property: 17.1 Definition; 17.2 Meaning; 17.3 Examples; 17.3.1 Real numbers; 17.3.2 Matrices; 17.3.3 Other examples; 17.4 Propositional logic; 17.4.1 Rule of replacement; 17.4.2 Truth functional connectives; 17.5 Distributivity and rounding; 17.6 Distributivity in rings; 17.7 Generalizations of distributivity; 17.7.1 Notions of antidistributivity; 17.8 Notes; 17.9 References; 17.10 External links

18 Elementary algebra: 18.1 Algebraic notation; 18.1.1 Alternative notation; 18.2 Concepts; 18.2.1 Variables; 18.2.2 Evaluating expressions; 18.2.3 Equations; 18.2.4 Substitution; 18.3 Solving algebraic equations; 18.3.1 Linear equations with one variable; 18.3.2 Linear equations with two variables; 18.3.3 Quadratic equations; 18.3.4 Exponential and logarithmic equations; 18.3.5 Radical equations; 18.3.6 System of linear equations; 18.3.7 Other types of systems of linear equations; 18.4 See also; 18.5 References; 18.6 External links

19 Equating coefficients: 19.1 Example in real fractions; 19.2 Example in nested radicals; 19.3 Example of testing for linear dependence of equations; 19.4 Example in complex numbers; 19.5 References

20 Equation: 20.1 Introduction; 20.1.1 Parameters and unknowns; 20.1.2 Analogous illustration; 20.1.3 Identities; 20.2 Properties; 20.3 Algebra; 20.3.1 Polynomial equations; 20.3.2 Systems of linear equations; 20.4 Geometry; 20.4.1 Analytic geometry; 20.4.2 Cartesian equations; 20.4.3 Parametric equations; 20.5 Number theory; 20.5.1 Diophantine equations; 20.5.2 Algebraic and transcendental numbers; 20.5.3 Algebraic geometry; 20.6 Differential equations; 20.6.1 Ordinary differential equations; 20.6.2 Partial differential equations; 20.7 Types of equations; 20.8 See also; 20.9 References; 20.10 External links

21 Euler’s four-square identity: 21.1 See also; 21.2 References; 21.3 External links

22 Extraneous and missing solutions: 22.1 Extraneous solutions: multiplication; 22.2 Extraneous solutions: rational; 22.3 Missing solutions: division; 22.4 Other operations; 22.5 See also

23 Factorization: 23.1 Integers; 23.2 Polynomials; 23.2.1 General methods; 23.2.2 Recognizable patterns; 23.2.3 Using formulas for polynomial roots; 23.2.4 Factoring over the complex numbers; 23.3 Matrices; 23.4 Unique factorization domains; 23.4.1 Euclidean domains; 23.5 See also; 23.6 Notes; 23.7 References; 23.8 External links

24 FOIL method: 24.1 Examples; 24.2 The distributive law; 24.3 Reverse FOIL; 24.4 Table as an alternative to FOIL; 24.5 Generalizations; 24.6 See also; 24.7 Further reading

25 Geometry of roots of real polynomials: 25.1 Complex roots of quadratic polynomials; 25.2 References; 25.3 External links

26 Identity (mathematics): 26.1 Common identities; 26.1.1 Trigonometric identities; 26.1.2 Exponential identities; 26.1.3 Logarithmic identities; 26.1.4 Hyperbolic function identities; 26.2 See also; 26.3 References; 26.4 External links

27 Inequality (mathematics): 27.1 Properties; 27.1.1 Transitivity; 27.1.2 Converse; 27.1.3 Addition and subtraction; 27.1.4 Multiplication and division; 27.1.5 Additive inverse; 27.1.6 Multiplicative inverse; 27.1.7 Applying a function to both sides; 27.2 Ordered fields; 27.3 Chained notation; 27.4 Inequalities between means; 27.5 Power inequalities; 27.5.1 Examples; 27.6 Well-known inequalities; 27.7 Complex numbers and inequalities; 27.8 Vector inequalities; 27.9 General Existence Theorems; 27.10 See also; 27.11 Notes; 27.12 References; 27.13 External links

28 Inequation: 28.1 Chains of inequations; 28.2 Solving inequations; 28.3 Special; 28.4 See also; 28.5 References

29 Proofs involving the addition of natural numbers: 29.1 Definitions; 29.2 Proof of associativity; 29.3 Proof of identity element; 29.4 Proof of commutativity; 29.5 See also; 29.6 References; 29.7 Text and image sources, contributors, and licenses; 29.7.1 Text; 29.7.2 Images; 29.7.3 Content license

Chapter 1

Additive identity

In mathematics, the additive identity of a set that is equipped with the operation of addition is an element which, when added to any element x in the set, yields x. One of the most familiar additive identities is the number 0 from elementary mathematics, but additive identities occur in other mathematical structures where addition is defined, such as in groups and rings.

1.1 Elementary examples

• The additive identity familiar from elementary mathematics is zero, denoted 0. For example, 5 + 0 = 5 = 0 + 5.

• In the natural numbers N and all of its supersets (the integers Z, the rational numbers Q, the real numbers R, or the complex numbers C), the additive identity is 0. Thus for any one of these numbers n, n + 0 = n = 0 + n.

1.2 Formal definition

Let N be a set which is closed under the operation of addition, denoted +. An additive identity for N is any element e such that for any element n in N,

e + n = n = n + e

Example: The formula is n + 0 = n = 0 + n.
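For a concrete check, a minimal Python sketch (using addition modulo 7 on the set {0, 1, …, 6} as an illustrative structure, not an example from the article) searches for every element satisfying the definition:

```python
# Brute-force check of the additive-identity definition for addition mod 7
# on the set {0, 1, ..., 6}; the structure is chosen purely for illustration.
N = range(7)

def add(a, b):
    return (a + b) % 7

identities = [e for e in N if all(add(e, n) == n == add(n, e) for n in N)]
print(identities)  # [0] -- exactly one element satisfies e + n = n = n + e
```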

1.3 Further examples

• In a group the additive identity is the identity element of the group, is often denoted 0, and is unique (see below for proof).

• A ring or field is a group under the operation of addition and thus these also have a unique additive identity 0. This is defined to be different from the multiplicative identity 1 if the ring (or field) has more than one element. If the additive identity and the multiplicative identity are the same, then the ring is trivial (proved below).

• In the ring Mm×n(R) of m by n matrices over a ring R, the additive identity is denoted 0 and is the m by n matrix whose entries consist entirely of the identity element 0 in R. For example, in the 2 by 2 matrices over the integers M₂(Z) the additive identity is

$$0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$


• In the quaternions, 0 is the additive identity.

• In the ring of functions from R to R, the function mapping every number to 0 is the additive identity.

• In the additive group of vectors in Rn, the origin or zero vector is the additive identity.

1.4 Proofs

1.4.1 The additive identity is unique in a group

Let (G, +) be a group and let 0 and 0' in G both denote additive identities, so for any g in G,

0 + g = g = g + 0 and 0' + g = g = g + 0'

It follows from the above that

(0') = (0') + 0 = 0' + (0) = (0)

1.4.2 The additive identity annihilates ring elements

In a system with a multiplication operation that distributes over addition, the additive identity is a multiplicative absorbing element, meaning that for any s in S, s·0 = 0. This can be seen because:

s · 0 = s · (0 + 0) = s · 0 + s · 0 ⇒ s · 0 = s · 0 − s · 0 ⇒ s · 0 = 0

1.4.3 The additive and multiplicative identities are different in a non-trivial ring

Let R be a ring and suppose that the additive identity 0 and the multiplicative identity 1 are equal, or 0 = 1. Let r be any element of R. Then

r = r × 1 = r × 0 = 0

proving that R is trivial, that is, R = {0}. The contrapositive, that if R is non-trivial then 0 is not equal to 1, is therefore shown.

1.5 See also

• 0 (number)

• Additive inverse

• Identity element

• Multiplicative identity

1.6 References

• David S. Dummit, Richard M. Foote, Abstract Algebra, Wiley (3rd ed.), 2003, ISBN 0-471-43334-9.

1.7 External links

• uniqueness of additive identity in a ring at PlanetMath.org.

• Margherita Barile, “Additive Identity”, MathWorld.

Chapter 2

Additive inverse

In mathematics, the additive inverse of a number a is the number that, when added to a, yields zero. This number is also known as the opposite (number),[1] sign change, and negation.[2] For a real number, it reverses its sign: the opposite to a positive number is negative, and the opposite to a negative number is positive. Zero is the additive inverse of itself. The additive inverse of a is denoted by unary minus: −a (see the discussion below). For example, the additive inverse of 7 is −7, because 7 + (−7) = 0, and the additive inverse of −0.3 is 0.3, because −0.3 + 0.3 = 0. The additive inverse is defined as the inverse element under the binary operation of addition (see the discussion below), which allows a broad generalization to mathematical objects other than numbers. As for any inverse operation, taking the additive inverse twice has no effect: −(−x) = x.

2.1 Common examples

For a number and, generally, in any ring, the additive inverse can be calculated using multiplication by −1; that is, −n = −1 × n. Examples of rings of numbers are the integers, the rational numbers, the real numbers, and the complex numbers.

2.1.1 Relation to subtraction

Additive inverse is closely related to subtraction, which can be viewed as an addition of the opposite:

a − b = a + (−b).

Conversely, additive inverse can be thought of as subtraction from zero:

−a = 0 − a.

Hence, unary minus sign notation can be seen as a shorthand for subtraction with the “0” symbol omitted, although in correct typography there should be no space after a unary "−".

2.1.2 Other properties

In addition to the identities listed above, negation has the following algebraic properties:

−(a + b) = (−a) + (−b)
a − (−b) = a + b
(−a) × b = a × (−b) = −(a × b)
(−a) × (−b) = a × b
notably, (−a)² = a²


These complex numbers, two of the eight values of ⁸√1, are mutually opposite

2.2 Formal definition

The notation + is usually reserved for commutative binary operations, i.e. such that x + y = y + x, for all x, y . If such an operation admits an identity element o (such that x + o ( = o + x ) = x for all x), then this element is unique ( o′ = o′ + o = o ). For a given x , if there exists x′ such that x + x′ ( = x′ + x ) = o , then x′ is called an additive inverse of x. If + is associative (( x + y ) + z = x + ( y + z ) for all x, y, z), then an additive inverse is unique

x″ = x″ + o = x″ + (x + x′) = (x″ + x) + x′ = o + x′ = x′

For example, since addition of real numbers is associative, each real number has a unique additive inverse.

2.3 Other examples

All the following examples are in fact abelian groups:

• complex numbers: −(a + bi) = (−a) + (−b)i. On the complex plane, this operation rotates a complex number 180 degrees around the origin (see the image above).

• addition of real- and complex-valued functions: here, the additive inverse of a function f is the function −f defined by (−f)(x) = −f(x), for all x, such that f + (−f) = o, the zero function (o(x) = 0 for all x).

• more generally, what precedes applies to all functions with values in an abelian group ('zero' meaning then the identity element of this group):

• sequences, matrices and nets are also special kinds of functions.

• In a vector space the additive inverse −v is often called the opposite vector of v; it has the same magnitude as the original and the opposite direction. Additive inversion corresponds to scalar multiplication by −1. For Euclidean space, it is point reflection in the origin. Vectors in exactly opposite directions (multiplied by negative numbers) are sometimes referred to as antiparallel.

• vector space-valued functions (not necessarily linear),

• In modular arithmetic, the modular additive inverse of x is also defined: it is the number a such that a + x ≡ 0 (mod n). This additive inverse always exists. For example, the inverse of 3 modulo 11 is 8 because it is the solution to 3 + x ≡ 0 (mod 11).
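A minimal Python sketch of the modular case, assuming the convention that the inverse is reported as a residue in {0, …, n − 1}:

```python
# Modular additive inverse: the number a with (a + x) % n == 0.
def mod_additive_inverse(x, n):
    return (-x) % n   # Python's % returns a value in 0..n-1 for positive n

print(mod_additive_inverse(3, 11))              # 8, as in the example above
print((mod_additive_inverse(3, 11) + 3) % 11)   # 0
```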

2.4 Non-examples

Natural numbers, cardinal numbers, and ordinal numbers do not have additive inverses within their respective sets. Thus, for example, one can say that natural numbers do have additive inverses (in the larger set of integers), but because these additive inverses are not themselves natural numbers, the set of natural numbers is not closed under taking additive inverses.

2.5 See also

• Absolute value (related through the identity |−x| = |x|)

• Multiplicative inverse

• Additive identity

• Involution (mathematics)

• Reflection symmetry

2.6 Footnotes

[1] Tussy, Alan; Gustafson, R. (2012), Elementary Algebra (5th ed.), Cengage Learning, p. 40, ISBN 9781133710790.

[2] The term “negation” bears a reference to negative numbers, which can be misleading, because the additive inverse of a negative number is positive.

2.7 References

• Margherita Barile, “Additive Inverse”, MathWorld.

Chapter 3

Algebraic expression

“Rational expression” redirects here. For the notion in formal languages, see regular expression.

In mathematics, an algebraic expression is an expression built up from integer constants, variables, and the algebraic operations (addition, subtraction, multiplication, division and exponentiation by an exponent that is a rational number).[1] For example, 3x² − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power 1/2,

$$\sqrt{\frac{1 - x^2}{1 + x^2}}$$

is also an algebraic expression. By contrast, transcendental numbers like π and e are not algebraic.

A rational expression is an expression that may be rewritten to a rational fraction by using the properties of the arithmetic operations (commutative properties and associative properties of addition and multiplication, distributive property and rules for the operations on the fractions). In other words, a rational expression is an expression which may be constructed from the variables and the constants by using only the four operations of arithmetic. Thus

$$\frac{3x^2 - 2xy + c}{y^3 - 1}$$

is a rational expression, whereas

$$\sqrt{\frac{1 - x^2}{1 + x^2}}$$

is not.

A rational equation is an equation in which two rational fractions (or rational expressions) of the form P(x)/Q(x) are set equal to each other. These expressions obey the same rules as fractions. The equations can be solved by cross-multiplying. Division by zero is undefined, so that a solution causing formal division by zero is rejected.
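A hedged sketch of the cross-multiplication procedure, assuming SymPy is available; the equation x/(x² − 1) = 1/(x − 1) is an illustrative choice, not an example from the article:

```python
# Solve a rational equation by cross-multiplying, then reject any candidate
# that causes formal division by zero in the original equation.
import sympy as sp

x = sp.symbols('x')
# x/(x**2 - 1) = 1/(x - 1); cross-multiplying gives x*(x - 1) = x**2 - 1.
candidates = sp.solve(sp.Eq(x * (x - 1), x**2 - 1), x)
valid = [c for c in candidates
         if (x**2 - 1).subs(x, c) != 0 and (x - 1).subs(x, c) != 0]

print(candidates)   # [1]  -- formal solution of the cross-multiplied equation
print(valid)        # []   -- x = 1 is rejected: it makes both denominators zero
```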

3.1 Terminology

Algebra has its own terminology to describe parts of an expression:

1 – Exponent (power), 2 – coefficient, 3 – term, 4 – operator, 5 – constant, x, y - variables


3.2 In roots of polynomials

The roots of a polynomial expression of degree n, or equivalently the solutions of a polynomial equation, can always be written as algebraic expressions if n < 5 (see quadratic formula, cubic function, and quartic equation). Such a solution of an equation is called an algebraic solution. But the Abel-Ruffini theorem states that algebraic solutions do not exist for all such equations (just for some of them) if n ≥ 5.

3.3 Conventions

3.3.1 Variables

By convention, letters at the beginning of the alphabet (e.g. a, b, c ) are typically used to represent constants, and those toward the end of the alphabet (e.g. x, y and z ) are used to represent variables.[2] They are usually written in italics.[3]

3.3.2 Exponents

By convention, terms with the highest power (exponent) are written on the left; for example, x² is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x² is written x²).[4] Likewise, when the exponent (power) is one, it is omitted (e.g. 3x¹ is written 3x),[5] and, when the exponent is zero, the result is always 1 (e.g. 3x⁰ is written 3, since x⁰ is always 1).[6]

3.4 Algebraic vs. other mathematical expressions

The table below summarizes how algebraic expressions compare with several other types of mathematical expressions. A rational algebraic expression (or rational expression) is an algebraic expression that can be written as a quotient of polynomials, such as x² + 4x + 4. An irrational algebraic expression is one that is not rational, such as √x + 4.

3.5 See also

• Algebraic equation

• Linear equation § Algebraic equations

• Algebraic function

• Analytical expression

• Arithmetic expression

• Closed-form expression

• Expression (mathematics)

• Polynomial

• Term (logic)

3.6 Notes

[1] Morris, Christopher G. (1992). Academic Press dictionary of science and technology. p. 74.

[2] William L. Hosch (editor), The Britannica Guide to Algebra and Trigonometry, Britannica Educational Publishing, The Rosen Publishing Group, 2010, ISBN 1615302190, 9781615302192, page 71

[3] James E. Gentle, Numerical Linear Algebra for Applications in Statistics, Publisher: Springer, 1998, ISBN 0387985425, 9780387985428, 221 pages, [James E. Gentle page 183]

[4] David Alan Herzog, Teach Yourself Visually Algebra, Publisher John Wiley & Sons, 2008, ISBN 0470185597, 9780470185599, 304 pages, page 72

[5] John C. Peterson, Technical Mathematics With Calculus, Publisher Cengage Learning, 2003, ISBN 0766861899, 9780766861893, 1613 pages, page 31

[6] Jerome E. Kaufmann, Karen L. Schwitters, Algebra for College Students, Publisher Cengage Learning, 2010, ISBN 0538733543, 9780538733540, 803 pages, page 222

3.7 References

• James, Robert Clarke; James, Glenn (1992). Mathematics dictionary. p. 8.

3.8 External links

• Weisstein, Eric W., “Algebraic Expression”, MathWorld.

Chapter 4

Algebraic fraction

In algebra, an algebraic fraction is a fraction whose numerator and denominator are algebraic expressions. Two examples of algebraic fractions are 3x/(x² + 2x − 3) and √(x + 2)/(x² − 3). Algebraic fractions are subject to the same laws as arithmetic fractions.

A rational fraction is an algebraic fraction whose numerator and denominator are both polynomials. Thus 3x/(x² + 2x − 3) is a rational fraction, but √(x + 2)/(x² − 3) is not, because the numerator contains a square root function.

4.1 Terminology

In the algebraic fraction a/b, the dividend a is called the numerator and the divisor b is called the denominator. The numerator and denominator are called the terms of the algebraic fraction.

A complex fraction is a fraction whose numerator or denominator, or both, contains a fraction. A simple fraction contains no fraction either in its numerator or its denominator. A fraction is in lowest terms if the only factor common to the numerator and the denominator is 1.

An expression which is not in fractional form is an integral expression. An integral expression can always be written in fractional form by giving it the denominator 1. A mixed expression is the algebraic sum of one or more integral expressions and one or more fractional terms.

4.2 Rational fractions

See also: Rational function

If the expressions a and b are polynomials, the algebraic fraction is called a rational algebraic fraction[1] or simply rational fraction.[2][3] Rational fractions are also known as rational expressions. A rational fraction f(x)/g(x) is called proper if deg f(x) < deg g(x), and improper otherwise. For example, the rational fraction 2x/(x² − 1) is proper, and the rational fractions (x³ + x² + 1)/(x² − 5x + 6) and (x² − x + 1)/(5x² + 3) are improper. Any improper rational fraction can be expressed as the sum of a polynomial (possibly constant) and a proper rational fraction. In the first example of an improper fraction one has

$$\frac{x^3 + x^2 + 1}{x^2 - 5x + 6} = (x + 6) + \frac{24x - 35}{x^2 - 5x + 6},$$

where the second term is a proper rational fraction. The sum of two proper rational fractions is a proper rational fraction as well. The reverse process of expressing a proper rational fraction as the sum of two or more fractions is called resolving it into partial fractions. For example,

$$\frac{2x}{x^2 - 1} = \frac{1}{x - 1} + \frac{1}{x + 1}.$$


Here, the two terms on the right are called partial fractions.
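A hedged sketch of both operations, assuming SymPy is available; sp.div performs the polynomial division and sp.apart resolves a proper fraction into partial fractions:

```python
# Reproduce the two examples above: polynomial division of an improper
# rational fraction, and partial-fraction decomposition of a proper one.
import sympy as sp

x = sp.symbols('x')

quotient, remainder = sp.div(x**3 + x**2 + 1, x**2 - 5*x + 6, x)
print(quotient, remainder)             # x + 6 and 24*x - 35

print(sp.apart(2*x / (x**2 - 1), x))   # 1/(x + 1) + 1/(x - 1)
```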

4.3 Irrational fractions

An irrational fraction is one that contains the variable under a fractional exponent.[4] An example of an irrational fraction is

$$\frac{x^{1/2} - \tfrac{1}{3}a}{x^{1/3} - x^{1/2}}.$$

The process of transforming an irrational fraction to a rational fraction is known as rationalization. Every irrational fraction in which the radicals are monomials may be rationalized by finding the least common multiple of the indices of the roots, and substituting the variable for another variable with the least common multiple as exponent. In the example given, the least common multiple is 6, hence we can substitute x = z⁶ to obtain

$$\frac{z^3 - \tfrac{1}{3}a}{z^2 - z^3}.$$
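A hedged sketch of the substitution, assuming SymPy; declaring the variables positive lets the fractional powers simplify, and the printed result is a rational fraction in z equivalent to the one above:

```python
# Rationalize the irrational fraction by substituting x = z**6.
import sympy as sp

a = sp.symbols('a')
x, z = sp.symbols('x z', positive=True)

irrational = (x**sp.Rational(1, 2) - sp.Rational(1, 3)*a) / \
             (x**sp.Rational(1, 3) - x**sp.Rational(1, 2))
rationalized = sp.simplify(irrational.subs(x, z**6))
print(rationalized)   # no fractional powers of z remain
```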

4.4 Notes

[1] Bansi Lal (2006). Topics in Integral Calculus. p. 53.

[2] Ėrnest Borisovich Vinberg (2003). A course in algebra. p. 131.

[3] Parmanand Gupta. Comprehensive Mathematics XII. p. 739.

[4] Washington McCartney (1844). The principles of the differential and integral calculus; and their application to geometry. p. 203.

4.5 References

Brink, Raymond W. (1951). “IV. Fractions”. College Algebra.

Chapter 5

Algebraic operation

Algebraic operations in the solution to the quadratic equation. The radical sign, √, denoting a square root, is equivalent to exponentiation to the power of ½. The ± sign means the expression may be written with either a + or a − sign.

In mathematics, an algebraic operation is any one of the operations addition, subtraction, multiplication, division, raising to an integer power, and taking roots (fractional power). Algebraic operations are performed on an algebraic variable, term or expression,[1] and work in the same way as arithmetic operations.[2]

5.1 Notation

Multiplication symbols are usually omitted, and implied when there is no operator between two variables or terms, or when a coefficient is used. For example, 3 × x² is written as 3x², and 2 × x × y is written as 2xy.[3] Sometimes multiplication symbols are replaced with either a dot or a center-dot, so that x × y is written as either x . y or x · y. Plain text, programming languages, and calculators also use a single asterisk to represent the multiplication symbol,[4] and it must be explicitly used; for example, 3x is written as 3 * x. Rather than using the obelus symbol, ÷, division is usually represented with a vinculum, a horizontal line, e.g. 3/x + 1. In plain text and programming languages a slash (also called a solidus) is used, e.g. 3 / (x + 1). Exponents are usually formatted using superscripts, e.g. x². In plain text, and in the TeX mark-up language, the caret symbol, ^, represents exponents, so x² is written as x ^ 2.[5][6] In programming languages such as Ada,[7] Fortran,[8] Perl,[9] Python[10] and Ruby,[11] a double asterisk is used, so x² is written as x ** 2. The plus-minus sign, ±, is used as a shorthand notation for two expressions written as one, representing one expression with a plus sign, the other with a minus sign. For example, y = x ± 1 represents the two equations y = x + 1 and y = x − 1. Sometimes it is used for denoting a positive-or-negative term such as ±x.
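A minimal Python sketch of these plain-text conventions (the numeric values are arbitrary):

```python
# Multiplication must be written explicitly with *, exponentiation uses **,
# and a vinculum becomes a slash with explicit parentheses.
x, y = 5, 2

print(3 * x**2)       # the expression 3x^2
print(2 * x * y)      # the expression 2xy
print(3 / (x + 1))    # the expression 3/(x + 1)
```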


5.2 Arithmetic vs algebraic operations

Algebraic operations work in the same way as arithmetic operations, as can be seen in the table below. Note: the use of the letters a and b is arbitrary, and the examples would be equally valid if we had used x and y .

5.3 Properties of arithmetic and algebraic operations

5.4 References

[1] William Smyth, Elementary algebra: for schools and academies, Publisher Bailey and Noyes, 1864, "Algebraic Operations"

[2] Horatio Nelson Robinson, New elementary algebra: containing the rudiments of science for schools and academies, Ivison, Phinney, Blakeman, & Co., 1866, page 7

[3] Sin Kwai Meng, Chip Wai Lung, Ng Song Beng, “Algebraic notation”, in Mathematics Matters Secondary 1 Express Text- book, Publisher Panpac Education Pte Ltd, ISBN 9812738827, 9789812738820, page 68

[4] William P. Berlinghoff, Fernando Q. Gouvêa, Math through the Ages: A Gentle History for Teachers and Others, Publisher MAA, 2004, ISBN 0883857367, 9780883857366, page 75

[5] Ramesh Bangia, Dictionary of Information Technology, Publisher Laxmi Publications, Ltd., 2010, ISBN 9380298153, 9789380298153, page 212

[6] George Grätzer, First Steps in LaTeX, Publisher Springer, 1999, ISBN 0817641327, 9780817641320, page 17

[7] S. Tucker Taft, Robert A. Duff, Randall L. Brukardt, Erhard Ploedereder, Pascal Leroy, Ada 2005 Reference Manual, Volume 4348 of Lecture Notes in Computer Science, Publisher Springer, 2007, ISBN 3540693351, 9783540693352, page 13

[8] C. Xavier, Fortran 77 And Numerical Methods, Publisher New Age International, 1994, ISBN 812240670X, 9788122406702, page 20

[9] Randal Schwartz, brian foy, Tom Phoenix, Learning Perl, Publisher O'Reilly Media, Inc., 2011, ISBN 1449313140, 9781449313142, page 24

[10] Matthew A. Telles, Python Power!: The Comprehensive Guide, Publisher Course Technology PTR, 2008, ISBN 1598631586, 9781598631586, page 46

[11] Kevin C. Baird, Ruby by Example: Concepts and Code, Publisher No Starch Press, 2007, ISBN 1593271484, 9781593271480, page 72

[12] Ron Larson, Robert Hostetler, Bruce H. Edwards, Algebra And Trigonometry: A Graphing Approach, Publisher: Cengage Learning, 2007, ISBN 061885195X, 9780618851959, 1114 pages, page 7

5.5 See also

• Elementary algebra

• Order of operations

Chapter 6

Associative property

This article is about associativity in mathematics. For associativity in the central processing unit memory cache, see CPU cache. For associativity in programming languages, see operator associativity. “Associative” and “non-associative” redirect here. For associative and non-associative learning, see Learning#Types.

In mathematics, the associative property[1] is a property of some binary operations. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs. Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is, rearranging the parentheses in such an expression will not change its value. Consider the following equations:

(2 + 3) + 4 = 2 + (3 + 4) = 9

2 × (3 × 4) = (2 × 3) × 4 = 24.

Even though the parentheses were rearranged, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that “addition and multiplication of real numbers are associative operations”. Associativity is not to be confused with commutativity, which addresses whether a × b = b × a.

Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative. However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation and the vector cross product. In contrast to the theoretical counterpart, the addition of floating point numbers in computer science is not associative, and is an important source of rounding error.

6.1 Definition

Formally, a binary operation ∗ on a set S is called associative if it satisfies the associative law:

(x ∗ y) ∗ z = x ∗ (y ∗ z) for all x, y, z in S.

Here, ∗ is used to replace the symbol of the operation, which may be any symbol, or even the absence of a symbol (juxtaposition), as for multiplication:

(xy)z = x(yz) = xyz for all x, y, z in S.

The associative law can also be expressed in functional notation thus: f(f(x, y), z) = f(x, f(y, z)).


A binary operation ∗ on the set S is associative when this diagram commutes. That is, when the two paths from S×S×S to S compose to the same function from S×S×S to S.

6.2 Generalized associative law

If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression.[2] This is called the generalized associative law. For instance, a product of four elements may be written in five possible ways:

1. ((ab)c)d

2. (ab)(cd)

3. (a(bc))d

4. a((bc)d)

5. a(b(cd))

If the product operation is associative, the generalized associative law says that all these formulas will yield the same result, making the parentheses unnecessary. Thus “the” product can be written unambiguously as

abcd.

As the number of elements increases, the number of possible ways to insert parentheses grows quickly, but they remain unnecessary for disambiguation.

6.3 Examples

Some examples of associative operations include the following.

• The concatenation of the three strings “hello”, " ", “world” can be computed by concatenating the first two strings (giving “hello ") and appending the third string (“world”), or by joining the second and third string (giving " world”) and concatenating the first string (“hello”) with the result. The two methods produce the same result; string concatenation is associative (but not commutative).

• In arithmetic, addition and multiplication of real numbers are associative; i.e.,


In the absence of the associative property, five factors a, b, c, d, e result in a Tamari lattice of order four, possibly different products.

(x + y) + z = x + (y + z) = x + y + z
(xy)z = x(yz) = xyz
for all x, y, z ∈ R.

Because of associativity, the grouping parentheses can be omitted without ambiguity.

(x + y) + z = x + (y + z)

The addition of real numbers is associative.

• Addition and multiplication of complex numbers and quaternions are associative. Addition of octonions is also associative, but multiplication of octonions is non-associative.

• The greatest common divisor and least common multiple functions act associatively.

gcd(gcd(x, y), z) = gcd(x, gcd(y, z)) = gcd(x, y, z)
lcm(lcm(x, y), z) = lcm(x, lcm(y, z)) = lcm(x, y, z)
for all x, y, z ∈ Z.

• Taking the intersection or the union of sets:

(A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C
(A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C
for all sets A, B, C.

• If M is some set and S denotes the set of all functions from M to M, then the operation of functional composition on S is associative:

(f ◦ g) ◦ h = f ◦ (g ◦ h) = f ◦ g ◦ h for all f, g, h ∈ S.

• Slightly more generally, given four sets M, N, P and Q, with h: M to N, g: N to P, and f: P to Q, then

(f ◦ g) ◦ h = f ◦ (g ◦ h) = f ◦ g ◦ h

as before. In short, composition of maps is always associative.

• Consider a set with three elements, A, B, and C. The following operation:

is associative. Thus, for example, A(BC)=(AB)C = A. This operation is not commutative.

• Because matrices represent linear transformation functions, with matrix multiplication representing functional composition, one can immediately conclude that matrix multiplication is associative.
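A few of the examples above (string concatenation, gcd and lcm, and composition of functions) can be checked directly; a minimal Python sketch, with arbitrarily chosen test values:

```python
from math import gcd

# String concatenation is associative but not commutative.
a, b, c = "hello", " ", "world"
print((a + b) + c == a + (b + c))    # True
print(a + b == b + a)                # False

# gcd acts associatively (lcm behaves the same way).
x, y, z = 12, 18, 30
print(gcd(gcd(x, y), z) == gcd(x, gcd(y, z)))    # True

# Composition of functions is associative.
f, g, h = (lambda t: t + 1), (lambda t: 2 * t), (lambda t: t ** 2)
compose = lambda p, q: (lambda t: p(q(t)))
lhs, rhs = compose(compose(f, g), h), compose(f, compose(g, h))
print(all(lhs(t) == rhs(t) for t in range(-5, 6)))   # True
```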

6.4 Propositional logic

6.4.1 Rule of replacement

In standard truth-functional propositional logic, association,[3][4] or associativity[5] are two valid rules of replacement. The rules allow one to move parentheses in logical expressions in logical proofs. The rules are:

(P ∨ (Q ∨ R)) ⇔ ((P ∨ Q) ∨ R) and

(P ∧ (Q ∧ R)) ⇔ ((P ∧ Q) ∧ R), where " ⇔ " is a metalogical symbol representing “can be replaced in a proof with.”

6.4.2 Truth functional connectives

Associativity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that associativity is a property of particular connectives. The following are truth-functional tautologies. Associativity of disjunction:

(P ∨ (Q ∨ R)) ↔ ((P ∨ Q) ∨ R)

((P ∨ Q) ∨ R) ↔ (P ∨ (Q ∨ R)) Associativity of conjunction:

((P ∧ Q) ∧ R) ↔ (P ∧ (Q ∧ R))

(P ∧ (Q ∧ R)) ↔ ((P ∧ Q) ∧ R) Associativity of equivalence:

((P ↔ Q) ↔ R) ↔ (P ↔ (Q ↔ R))

(P ↔ (Q ↔ R)) ↔ ((P ↔ Q) ↔ R)

6.5 Non-associativity

A binary operation ∗ on a set S that does not satisfy the associative law is called non-associative. Symbolically,

(x ∗ y) ∗ z ≠ x ∗ (y ∗ z) for some x, y, z ∈ S.

For such an operation the order of evaluation does matter. For example:

• Subtraction

(5 − 3) − 2 ≠ 5 − (3 − 2)

• Division

(4/2)/2 ≠ 4/(2/2)

• Exponentiation

$2^{(1^2)} \neq (2^1)^2$

Also note that infinite sums are not generally associative, for example:

(1 − 1) + (1 − 1) + (1 − 1) + (1 − 1) + (1 − 1) + (1 − 1) + ... = 0

whereas

1 + (−1 + 1) + (−1 + 1) + (−1 + 1) + (−1 + 1) + (−1 + 1) + ⋯ = 1

The study of non-associative structures arises from reasons somewhat different from the mainstream of classical algebra. One area within non-associative algebra that has grown very large is that of Lie algebras. There the associative law is replaced by the Jacobi identity. Lie algebras abstract the essential nature of infinitesimal transformations, and have become ubiquitous in mathematics. There are other specific types of non-associative structures that have been studied in depth; these tend to come from some specific applications or areas such as combinatorial mathematics. Other examples are Quasigroup, Quasifield, Non-associative ring, Non-associative algebra and Commutative non-associative magmas.

6.5.1 Nonassociativity of floating point calculation

In mathematics, addition and multiplication of real numbers are associative. By contrast, in computer science, the addition and multiplication of floating point numbers is not associative, as rounding errors are introduced when dissimilar-sized values are joined together.[6] To illustrate this, consider a floating point representation with a 4-bit mantissa:

(1.000₂×2⁰ + 1.000₂×2⁰) + 1.000₂×2⁴ = 1.000₂×2¹ + 1.000₂×2⁴ = 1.001₂×2⁴
1.000₂×2⁰ + (1.000₂×2⁰ + 1.000₂×2⁴) = 1.000₂×2⁰ + 1.000₂×2⁴ = 1.000₂×2⁴

Even though most computers compute with 24 or 53 bits of mantissa,[7] this is an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimise the errors. It can be especially problematic in parallel computing.[8][9]
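The same effect is easy to reproduce with IEEE 754 double precision; a minimal Python sketch mirroring the small-plus-small-plus-large grouping above:

```python
import math

small, large = 1.0, 1e16
print((small + small) + large)    # 1.0000000000000002e+16
print(small + (small + large))    # 1e+16 -- each 1.0 is absorbed by the large term

# Compensated summation returns the correctly rounded sum of all inputs,
# independent of grouping.
print(math.fsum([small, small, large]))   # 1.0000000000000002e+16
```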

6.5.2 Notation for non-associative operations

Main article: Operator associativity

In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression. However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses. A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e.,

x ∗ y ∗ z = (x ∗ y) ∗ z
w ∗ x ∗ y ∗ z = ((w ∗ x) ∗ y) ∗ z
and so on, for all w, x, y, z ∈ S,

while a right-associative operation is conventionally evaluated from right to left:

x ∗ y ∗ z = x ∗ (y ∗ z)
w ∗ x ∗ y ∗ z = w ∗ (x ∗ (y ∗ z))
and so on, for all w, x, y, z ∈ S.

Both left-associative and right-associative operations occur. Left-associative operations include the following:

• Subtraction and division of real numbers:

x − y − z = (x − y) − z for all x, y, z ∈ R; x/y/z = (x/y)/z for all x, y, z ∈ R with y ≠ 0, z ≠ 0.

• Function application:

(f x y) = ((f x) y) This notation can be motivated by the currying isomorphism.

Right-associative operations include the following:

• Exponentiation of real numbers:

$x^{y^z} = x^{(y^z)}.$

The reason exponentiation is right-associative is that a repeated left-associative exponentiation operation would be less useful. Multiple appearances could (and would) be rewritten with multiplication:

$(x^y)^z = x^{(yz)}.$

• Function definition

Z → Z → Z = Z → (Z → Z)
x ↦ y ↦ x − y = x ↦ (y ↦ x − y)

Using right-associative notation for these operations can be motivated by the Curry–Howard correspondence and by the currying isomorphism.

Non-associative operations for which no conventional evaluation order is defined include the following.

• Taking the Cross product of three vectors:

a × (b × c) ≠ (a × b) × c for some vectors a, b, c ∈ R³

• Taking the pairwise average of real numbers:

((x + y)/2 + z)/2 ≠ (x + (y + z)/2)/2 for all x, y, z ∈ R with x ≠ z.

• Taking the relative complement of sets: (A\B)\C is not the same as A\(B\C). (Compare material nonimplication in logic.)

6.6 See also

• Light’s associativity test

• A semigroup is a set with a closed associative binary operation.

• Commutativity and distributivity are two other frequently discussed properties of binary operations.

• Power associativity, alternativity and N-ary associativity are weak forms of associativity.

6.7 References

[1] Thomas W. Hungerford (1974). Algebra (1st ed.). Springer. p. 24. ISBN 0387905189. Definition 1.1 (i) a(bc) = (ab)c for all a, b, c in G.

[2] Durbin, John R. (1992). Modern Algebra: an Introduction (3rd ed.). New York: Wiley. p. 78. ISBN 0-471-51001-7. If a1, a2, . . . , an (n ≥ 2) are elements of a set with an associative operation, then the product a1a2 . . . an is unambiguous; this is, the same element will be obtained regardless of how parentheses are inserted in the product

[3] Moore and Parker

[4] Copi and Cohen

[5] Hurley

[6] Knuth, Donald, The Art of Computer Programming, Volume 3, section 4.2.2

[7] IEEE Computer Society (August 29, 2008). “IEEE Standard for Floating-Point Arithmetic”. IEEE. doi:10.1109/IEEESTD.2008.4610935. ISBN 978-0-7381-5753-5. IEEE Std 754-2008.

[8] Villa, Oreste; Chavarría-mir, Daniel; Gurumoorthi, Vidhya; Márquez, Andrés; Krishnamoorthy, Sriram, Effects of Floating- Point non-Associativity on Numerical Computations on Massively Multithreaded Systems (PDF), retrieved 2014-04-08

[9] Goldberg, David, “What Every Computer Scientist Should Know About Floating Point Arithmetic” (PDF), ACM Computing Surveys 23 (1): 5–48, doi:10.1145/103162.103163, retrieved 2014-04-08

Chapter 7

Brahmagupta’s identity

In algebra, Brahmagupta’s identity says that the product of two numbers of the form a² + nb² is itself a number of that form. In other words, the set of such numbers is closed under multiplication. Specifically:

(a² + nb²)(c² + nd²) = (ac − nbd)² + n(ad + bc)²   (1)
(a² + nb²)(c² + nd²) = (ac + nbd)² + n(ad − bc)²,   (2)

Both (1) and (2) can be verified by expanding each side of the equation. Also, (2) can be obtained from (1), or (1) from (2), by changing b to −b. This identity holds in both the ring of integers and the ring of rational numbers, and more generally in any commutative ring.

7.1 History

The identity is a generalization of the so-called Fibonacci identity (where n=1) which is actually found in Diophantus' Arithmetica (III, 19). That identity was rediscovered by Brahmagupta (598–668), an Indian mathematician and astronomer, who generalized it and used it in his study of what is now called Pell’s equation. His Brahmasphutasiddhanta was translated from Sanskrit into Arabic by Mohammad al-Fazari, and was subsequently translated into Latin in 1126.[1] The identity later appeared in Fibonacci's Book of Squares in 1225.

7.2 Application to Pell’s equation

In its original context, Brahmagupta applied his discovery to the solution of what was later called Pell’s equation, namely x² − Ny² = 1. Using the identity in the form

(x₁² − Ny₁²)(x₂² − Ny₂²) = (x₁x₂ + Ny₁y₂)² − N(x₁y₂ + x₂y₁)²,

he was able to “compose” triples (x₁, y₁, k₁) and (x₂, y₂, k₂) that were solutions of x² − Ny² = k, to generate the new triple

(x₁x₂ + Ny₁y₂, x₁y₂ + x₂y₁, k₁k₂).

Not only did this give a way to generate infinitely many solutions to x² − Ny² = 1 starting with one solution, but also, by dividing such a composition by k₁k₂, integer or “nearly integer” solutions could often be obtained. The general method for solving the Pell equation given by Bhaskara II in 1150, namely the chakravala (cyclic) method, was also based on this identity.[2]
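A minimal Python sketch of the composition rule, using N = 2 and the starting solution (3, 2, 1) as an illustrative case:

```python
# Brahmagupta's composition for triples (x, y, k) with x^2 - N*y^2 = k:
# composing (x1, y1, k1) and (x2, y2, k2) gives a triple with k = k1*k2.
def compose(t1, t2, N):
    x1, y1, k1 = t1
    x2, y2, k2 = t2
    return (x1*x2 + N*y1*y2, x1*y2 + x2*y1, k1*k2)

N = 2
base = (3, 2, 1)                 # 3^2 - 2*2^2 = 1, a solution of Pell's equation
t = base
for _ in range(3):
    t = compose(t, base, N)
    x, y, k = t
    print((x, y), x*x - N*y*y)   # (17, 12) 1, then (99, 70) 1, then (577, 408) 1
```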


7.3 See also

• Brahmagupta matrix

• Brahmagupta–Fibonacci identity

• Indian mathematics

• List of Indian mathematicians

7.4 References

[1] George G. Joseph (2000). The Crest of the Peacock, p. 306. Princeton University Press. ISBN 0-691-00659-8.

[2] John Stillwell (2002), Mathematics and its history (2 ed.), Springer, pp. 72–76, ISBN 978-0-387-95336-6

7.5 External links

• Brahmagupta’s identity at PlanetMath

• Brahmagupta Identity on MathWorld

• A Collection of Algebraic Identities

Chapter 8

Brahmagupta–Fibonacci identity

In algebra, the Brahmagupta–Fibonacci identity or simply Fibonacci’s identity (and in fact due to Diophantus of Alexandria) says that the product of two sums each of two squares is itself a sum of two squares. In other words, the set of all sums of two squares is closed under multiplication. Specifically:

(a² + b²)(c² + d²) = (ac − bd)² + (ad + bc)²   (1)
(a² + b²)(c² + d²) = (ac + bd)² + (ad − bc)².   (2)

For example,

(1² + 4²)(2² + 7²) = 26² + 15² = 30² + 1².

The identity is a special case (n = 2) of Lagrange’s identity, and is first found in Diophantus. Brahmagupta proved and used a more general identity (the Brahmagupta identity), equivalent to

(a² + nb²)(c² + nd²) = (ac − nbd)² + n(ad + bc)²   (3)
(a² + nb²)(c² + nd²) = (ac + nbd)² + n(ad − bc)²,   (4)

showing that, for any fixed n, the set of all numbers of the form x² + ny² is closed under multiplication. Both (1) and (2) can be verified by expanding each side of the equation. Also, (2) can be obtained from (1), or (1) from (2), by changing b to −b. This identity holds in both the ring of integers and the ring of rational numbers, and more generally in any commutative ring.

In the integer case this identity finds applications in number theory; for example, when used in conjunction with one of Fermat’s theorems it proves that the product of a square and any number of primes of the form 4n + 1 is also a sum of two squares.

8.1 History

The identity is actually first found in Diophantus' Arithmetica (III, 19), of the third century A.D. It was rediscovered by Brahmagupta (598–668), an Indian mathematician and astronomer, who generalized it (to the Brahmagupta identity) and used it in his study of what is now called Pell’s equation. His Brahmasphutasiddhanta was translated from Sanskrit into Arabic by Mohammad al-Fazari, and was subsequently translated into Latin in 1126.[1] The identity later appeared in Fibonacci's Book of Squares in 1225.


8.2 Related identities

Analogous identities are Euler’s four-square identity, related to quaternions, and Degen’s eight-square identity, derived from the octonions, which has connections to Bott periodicity. There is also Pfister’s sixteen-square identity, though it is no longer bilinear.

8.3 Relation to complex numbers

If a, b, c, and d are real numbers, this identity is equivalent to the multiplication property for absolute values of complex numbers namely that:

|a + bi||c + di| = |(a + bi)(c + di)|

since

|a + bi||c + di| = |(ac − bd) + i(ad + bc)|,

by squaring both sides

|a + bi|²|c + di|² = |(ac − bd) + i(ad + bc)|²,

and by the definition of absolute value,

(a² + b²)(c² + d²) = (ac − bd)² + (ad + bc)².
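A minimal Python check of this equivalence, reusing the worked example (1² + 4²)(2² + 7²) = 901 from above:

```python
a, b, c, d = 1, 4, 2, 7

# The identity itself.
print((a*a + b*b) * (c*c + d*d))            # 901
print((a*c - b*d)**2 + (a*d + b*c)**2)      # 901

# The multiplicative property of complex absolute values.
z, w = complex(a, b), complex(c, d)
print(abs(z) * abs(w), abs(z * w))          # both approximately sqrt(901) = 30.0166...
```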

8.4 Interpretation via norms

In the case that the variables a, b, c, and d are rational numbers, the identity may be interpreted as the statement that the norm in the field Q(i) is multiplicative. That is, we have

N(a + bi) = a² + b² and N(c + di) = c² + d²,

and also

N((a + bi)(c + di)) = N((ac − bd) + i(ad + bc)) = (ac − bd)² + (ad + bc)².

Therefore the identity is saying that

N((a + bi)(c + di)) = N(a + bi) · N(c + di).

8.5 Application to Pell’s equation

In its original context, Brahmagupta applied his discovery (the Brahmagupta identity) to the solution of Pell’s equation, namely x² − Ny² = 1. Using the identity in the more general form

(x₁² − Ny₁²)(x₂² − Ny₂²) = (x₁x₂ + Ny₁y₂)² − N(x₁y₂ + x₂y₁)²,

he was able to “compose” triples (x₁, y₁, k₁) and (x₂, y₂, k₂) that were solutions of x² − Ny² = k, to generate the new triple

(x₁x₂ + Ny₁y₂, x₁y₂ + x₂y₁, k₁k₂).

Not only did this give a way to generate infinitely many solutions to x² − Ny² = 1 starting with one solution, but also, by dividing such a composition by k₁k₂, integer or “nearly integer” solutions could often be obtained. The general method for solving the Pell equation given by Bhaskara II in 1150, namely the chakravala (cyclic) method, was also based on this identity.[2]

8.6 See also

• Brahmagupta matrix

• Indian mathematics

• List of Indian mathematicians

• Euler’s four-square identity

8.7 References

[1] George G. Joseph (2000). The Crest of the Peacock, p. 306. Princeton University Press. ISBN 0-691-00659-8.

[2] John Stillwell (2002), Mathematics and its history (2 ed.), Springer, pp. 72–76, ISBN 978-0-387-95336-6

8.8 External links

• Brahmagupta’s identity at PlanetMath

• Brahmagupta Identity on MathWorld

• A Collection of Algebraic Identities

Chapter 9

Carlyle circle

In mathematics, a Carlyle circle is a certain circle in a coordinate plane associated with a quadratic equation. The circle has the property that the solutions of the quadratic equation are the horizontal coordinates of the intersections of the circle with the horizontal axis.[1] The idea of using such a circle to solve a quadratic equation is attributed to Thomas Carlyle (1795–1881).[2] Carlyle circles have been used to develop ruler-and-compass constructions of regular polygons.

9.1 Definition

Carlyle circle of the quadratic equation x² − sx + p = 0.

Given the quadratic equation

x² − sx + p = 0


the circle in the coordinate plane having the line segment joining the points A(0, 1) and B(s, p) as a diameter is called the Carlyle circle of the quadratic equation.

9.2 Defining property

The defining property of the Carlyle circle can be established thus: the equation of the circle having the line segment AB as diameter is

x(x − s) + (y − 1)(y − p) = 0.

The abscissas of the points where the circle intersects the x-axis are the roots of the equation (obtained by setting y = 0 in the equation of the circle)

x² − sx + p = 0.
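A minimal numerical sketch of this property: intersect the Carlyle circle with the x-axis and compare the abscissas with the quadratic formula (here for x² + x − 1 = 0, i.e. s = −1, p = −1):

```python
import math

def carlyle_roots(s, p):
    # Circle with diameter from A(0, 1) to B(s, p): the centre is the midpoint,
    # and the radius is the distance from the centre to A.
    cx, cy = s / 2, (1 + p) / 2
    r_squared = cx**2 + (cy - 1)**2
    half_chord = math.sqrt(r_squared - cy**2)   # assumes two real intersections
    return cx - half_chord, cx + half_chord

print(carlyle_roots(-1, -1))                               # (-1.618..., 0.618...)
print((-1 - math.sqrt(5)) / 2, (-1 + math.sqrt(5)) / 2)    # same values from the quadratic formula
```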

9.3 Construction of regular polygons

9.3.1 Regular pentagon

The problem of constructing a regular pentagon is equivalent to the problem of constructing the roots of the equation

z⁵ − 1 = 0.

One root of this equation is z₀ = 1, which corresponds to the point P₀(1, 0). Removing the factor corresponding to this root, the other roots turn out to be roots of the equation

z⁴ + z³ + z² + z + 1 = 0.

These roots can be represented in the form ω, ω², ω³, ω⁴ where ω = exp(2πi/5). Let these correspond to the points P₁, P₂, P₃, P₄. Letting

p₁ = ω + ω⁴,  p₂ = ω² + ω³

we have

p₁ + p₂ = −1,  p₁p₂ = −1.

(These can be quickly shown to be true by direct substitution into the quartic above and noting that ω⁶ = ω and ω⁷ = ω².)

So p₁ and p₂ are the roots of the quadratic equation

x² + x − 1 = 0.

The Carlyle circle associated with this quadratic has a diameter with endpoints at (0, 1) and (−1, −1) and center at (−1/2, 0). Carlyle circles are used to construct p₁ and p₂. From the definitions of p₁ and p₂ it also follows that

p₁ = 2 cos(2π/5),  p₂ = 2 cos(4π/5).

These are then used to construct the points P₁, P₂, P₃, P₄. This detailed procedure involving Carlyle circles for the construction of regular pentagons is given below.[2]

1. Draw a circle in which to inscribe the pentagon and mark the center point O.

Construction of regular pentagon using Carlyle circles

2. Draw a horizontal line through the center of the circle. Mark one intersection with the circle as point B.

3. Construct a vertical line through the center. Mark one intersection with the circle as point A.

4. Construct the point M as the midpoint of O and B.

5. Draw a circle centered at M through the point A. Mark its intersection with the horizontal line (inside the original circle) as the point W and its intersection outside the circle as the point V.

6. Draw a circle of radius OA and center W. It intersects the original circle at two of the vertices of the pentagon.

7. Draw a circle of radius OA and center V. It intersects the original circle at two of the vertices of the pentagon.

8. The fifth vertex is the intersection of the horizontal axis with the original circle.
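A minimal numeric check of the quantities behind this construction: p₁ = 2 cos(2π/5) and p₂ = 2 cos(4π/5) are indeed the two x-axis intersections of the Carlyle circle of x² + x − 1 = 0:

```python
import math

p1, p2 = 2 * math.cos(2 * math.pi / 5), 2 * math.cos(4 * math.pi / 5)
print(p1, p2)                              # 0.618... and -1.618...
print(p1 + p2, p1 * p2)                    # -1.0 and -1.0, up to rounding
print(p1**2 + p1 - 1, p2**2 + p2 - 1)      # both approximately 0
```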

9.3.2 Regular heptadecagon

There is a similar method involving Carlyle circles to construct regular heptadecagons.[2] The attached figure illustrates the procedure.

Construction of a regular heptadecagon using Carlyle circles

9.3.3 Regular 257-gon

To construct a regular 257-gon using Carlyle circles, as many as 24 Carlyle circles are to be constructed. One of these is the circle to solve the quadratic equation x² + x − 64 = 0.[2]

9.3.4 Regular 65537-gon

There is a procedure involving Carlyle circles for the construction of a regular 65537-gon. However there are practical problems for the implementation of the procedure, as, for example, it requires the construction of the Carlyle circle for the solution of the quadratic equation x² + x − 2¹⁴ = 0.[2]

9.4 References

[1] Weisstein, Eric W. “Carlyle Circle”. From MathWorld—A Wolfram Web Resource. Retrieved 21 May 2013.

[2] DeTemple, Duane W. (Feb 1991). “Carlyle circles and Lemoine simplicity of polygon constructions” (PDF). The American Mathematical Monthly 98 (2): 97–108. doi:10.2307/2323939. Retrieved 6 November 2011.

Construction of a regular 257-gon using Carlyle circles

Chapter 10

Change of variables

“Substitution (algebra)" redirects here. It is not to be confused with substitution (logic).

In mathematics, the operation of substitution consists of replacing all the occurrences of a free variable appearing in an expression or a formula by a number or another expression. In other words, an expression involving free variables may be considered as defining a function, and substituting values for the variables in the expression is equivalent to applying the function defined by the expression to these values.

A change of variables is commonly a particular type of substitution, where the substituted values are expressions that depend on other variables. This is a standard technique used to reduce a difficult problem to a simpler one. A change of coordinates is a common type of change of variables. However, if the expression in which the variables are changed involves derivatives or integrals, the change of variable does not reduce to a substitution.

A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth-degree polynomial equation x6 − 9x3 + 8 = 0.

Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem). This particular equation, however, may be written

(x3)2 − 9(x3) + 8 = 0

(this is a simple case of a polynomial decomposition). Thus the equation may be simplified by defining a new variable u = x3. Substituting the cube root of u for x in the polynomial gives u2 − 9u + 8 = 0, which is just a quadratic equation with solutions u = 1 and u = 8.

The solutions in terms of the original variable are obtained by substituting x3 back in for u: x3 = 1 and x3 = 8.

Then, assuming that x is real, x = (1)1/3 = 1 and x = (8)1/3 = 2.


10.1 Simple example

Consider the system of equations

xy + x + y = 71

x2y + xy2 = 880,

where x and y are positive integers with x > y. (Source: 1991 AIME) Solving this normally is not terrible, but it may get a little tedious. However, we can rewrite the second equation as xy(x + y) = 880. Making the substitution s = x + y, t = xy reduces the system to s + t = 71, st = 880. Solving this gives (s, t) = (16, 55) or (s, t) = (55, 16). Back-substituting the first ordered pair gives us x + y = 16, xy = 55, which easily gives the solution (x, y) = (11, 5). Back-substituting the second ordered pair gives us x + y = 55, xy = 16, which gives no solutions. Hence the solution that solves the system is (x, y) = (11, 5).
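For readers who want to check the substitution mechanically, the following sketch uses the SymPy library (an illustration, not part of the original solution) to solve the reduced system and back-substitute.

import sympy as sp

# Sketch of the substitution s = x + y, t = xy for the system above.
x, y = sp.symbols('x y', positive=True)
s, t = sp.symbols('s t')

# the reduced system s + t = 71, s*t = 880
for s_val, t_val in sp.solve([s + t - 71, s * t - 880], [s, t]):
    # back-substitute: x + y = s, x*y = t
    sols = sp.solve([x + y - s_val, x * y - t_val], [x, y])
    print((s_val, t_val), sols)
# (16, 55) gives (x, y) = (11, 5) or (5, 11); (55, 16) gives no integer solutions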

10.2 Formal introduction

Let A, B be smooth manifolds and let Φ : A → B be a Cr-diffeomorphism between them, that is: Φ is an r times continuously differentiable, bijective map from A to B with an r times continuously differentiable inverse from B to A. Here r may be any natural number (or zero), ∞ (smooth) or ω (analytic). The map Φ is called a regular coordinate transformation or regular variable substitution, where regular refers to the Cr-ness of Φ. Usually one writes x = Φ(y) to indicate the replacement of the variable x by the variable y, substituting the value of Φ at y for every occurrence of x.

10.3 Other examples

10.3.1 Coordinate transformation

Some systems can be more easily solved when switching to cylindrical coordinates. Consider for example the equation

U(x, y, z) := (x2 + y2)·√(1 − x2/(x2 + y2)) = 0.

This may be a potential energy function for some physical problem. If one does not immediately see a solution, one might try the substitution

(x, y, z) = Φ(r, θ, z) given by Φ(r, θ, z) = (r cos(θ), r sin(θ), z) .

Note that if θ runs outside a 2π-length interval, for example [0, 2π], the map Φ is no longer bijective. Therefore Φ should be limited to, for example, (0, ∞) × [0, 2π) × (−∞, ∞). Notice how r = 0 is excluded, for Φ is not bijective at the origin (θ can take any value, and the point will be mapped to (0, 0, z)). Then, replacing all occurrences of the original variables by the new expressions prescribed by Φ and using the identity sin2 x + cos2 x = 1, we get

V(r, θ, z) = r2·√(1 − r2 cos2 θ / r2) = r2·√(1 − cos2 θ) = r2·|sin θ|.

Now the solutions can be readily found: sin(θ) = 0, so θ = 0 or θ = π. Applying the inverse of Φ shows that this is equivalent to y = 0 while x ≠ 0. Indeed we see that for y = 0 the function vanishes, except for the origin. Note that, had we allowed r = 0, the origin would also have been a solution, though it is not a solution to the original problem. Here the bijectivity of Φ is crucial. Note also that the function is always positive (for x, y, z ∈ R), hence the absolute values.

10.3.2 Differentiation

Main article: Chain rule

The chain rule is used to simplify complicated differentiation. For example, to calculate the derivative

d/dx ( sin(x2) ),

the variable x may be changed by introducing u = x2. Then, by the chain rule:

d/dx = (d/du)·(du/dx) = (d/dx)(x2)·(d/du) = 2x·(d/du),

so that

(d/dx) sin(x2) = 2x·(d/du) sin(u) = 2x cos(x2),

where in the very last step u has been replaced with x2.
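As a quick check, the following sketch uses the SymPy library (an illustration added here, not part of the article) to confirm that the substitution u = x2 gives the same derivative as direct differentiation.

import sympy as sp

x, u = sp.symbols('x u')
# direct differentiation
direct = sp.diff(sp.sin(x**2), x)
# change of variable u = x**2: d/dx sin(u) = (du/dx) * d/du sin(u)
via_subst = sp.diff(x**2, x) * sp.diff(sp.sin(u), u).subs(u, x**2)
assert sp.simplify(direct - via_subst) == 0   # both equal 2*x*cos(x**2)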

10.3.3 Integration

Main article: Integration by substitution

Difficult integrals may often be evaluated by changing variables; this is enabled by the substitution rule and is analogous to the use of the chain rule above. Difficult integrals may also be solved by simplifying the integral using a change of variables given by the corresponding Jacobian matrix and determinant. Using the Jacobian determinant and the corresponding change of variable that it gives is the basis of coordinate systems such as polar, cylindrical, and spherical coordinate systems.

10.3.4 Differential equations

Variable changes for differentiation and integration are taught in elementary calculus and the steps are rarely carried out in full. The very broad use of variable changes is apparent when considering differential equations, where the independent variables may be changed using the chain rule or the dependent variables are changed resulting in some differentiation to be carried out. Exotic changes, such as the mingling of dependent and independent variables in point and contact transformations, can be very complicated but allow much freedom. Very often, a general form for a change is substituted into a problem and parameters picked along the way to best simplify the problem.

10.3.5 Scaling and shifting

Probably the simplest change is the scaling and shifting of variables, that is replacing them with new variables that are “stretched” and “moved” by constant amounts. This is very common in practical applications to get physical parameters out of problems. For an nth order derivative, the change simply results in

d^n y / dx^n = (yscale / xscale^n) · d^n ŷ / dx̂^n,

where

x = x̂·xscale + xshift

y = ŷ·yscale + yshift.

This may be shown readily through the chain rule and the linearity of differentiation. This change is very common in practical applications to get physical parameters out of problems, for example, the boundary value problem

µ · d2u/dy2 = dp/dx;   u(0) = u(L) = 0

describes parallel fluid flow between flat solid walls separated by a distance L; µ is the viscosity and dp/dx the pressure gradient, both constants. By scaling the variables the problem becomes

d2û/dŷ2 = 1;   û(0) = û(1) = 0,

where

y = ŷ·L and u = û·(L2/µ)·(dp/dx).

Scaling is useful for many reasons. It simplifies analysis both by reducing the number of parameters and by simply making the problem neater. Proper scaling may normalize variables, that is, make them have a sensible unitless range such as 0 to 1. Finally, if a problem mandates numeric solution, the fewer the parameters, the fewer the computations.
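The scaling can be verified symbolically. The following SymPy sketch assumes the dimensionless solution û(ŷ) = (ŷ2 − ŷ)/2 of the scaled problem (an assumption made here for the check, not stated in the text) and confirms that undoing the scaling solves the original boundary value problem.

import sympy as sp

y, yhat, L, mu, dpdx = sp.symbols('y yhat L mu dpdx', positive=True)

# dimensionless solution of d^2 u_hat / d y_hat^2 = 1 with u_hat(0) = u_hat(1) = 0
u_hat = (yhat**2 - yhat) / 2
assert sp.diff(u_hat, yhat, 2) == 1
assert u_hat.subs(yhat, 0) == 0 and u_hat.subs(yhat, 1) == 0

# undo the scaling: y = yhat*L and u = u_hat * (L**2/mu) * dp/dx
u = (u_hat * L**2 / mu * dpdx).subs(yhat, y / L)

# check the original problem: mu * u'' = dp/dx with u(0) = u(L) = 0
assert sp.simplify(mu * sp.diff(u, y, 2) - dpdx) == 0
assert u.subs(y, 0) == 0 and sp.simplify(u.subs(y, L)) == 0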

10.3.6 Momentum vs. velocity

Consider a system of equations

m·v̇ = −∂H/∂x

m·ẋ = ∂H/∂v

for a given function H(x, v). The mass can be eliminated by the (trivial) substitution Φ(p) = (1/m)·p. Clearly this is a bijective map from R to R. Under the substitution v = Φ(p) the system becomes

ṗ = −∂H/∂x

ẋ = ∂H/∂p

10.3.7 Lagrangian mechanics

Main article: Lagrangian mechanics

Given a force field ϕ(t, x, v), Newton's equations of motion are

m·ẍ = ϕ(t, x, v).

Lagrange examined how these equations of motion change under an arbitrary substitution of variables x = Ψ(t, y), v = ∂Ψ(t, y)/∂t + (∂Ψ(t, y)/∂y)·w. He found that the equations

∂L/∂y = (d/dt)(∂L/∂w)

are equivalent to Newton's equations for the function L = T − V, where T is the kinetic and V the potential energy. In fact, when the substitution is chosen well (exploiting for example symmetries and constraints of the system) these equations are much easier to solve than Newton's equations in Cartesian coordinates.

10.4 See also

• Change of variables (PDE)

• Substitution property of equality

• Instantiation of universals

Chapter 11

Commutative property

For other uses, see Commute (disambiguation).

In mathematics, a binary operation is commutative if changing the order of the operands does not change the result.


This image illustrates that addition is commutative.

It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says “3 + 4 = 4 + 3” or “2 × 5 = 5 × 2”, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, “3 − 5 ≠ 5 − 3”); such operations are called noncommutative. The idea that simple operations, such as multiplication and addition of numbers, are commutative was for many years implicitly assumed, and the property was not named until the 19th century, when mathematics started to become formalized.

11.1 Common uses

The commutative property (or commutative law) is a property generally associated with binary operations and functions. If the commutative property holds for a pair of elements under a certain binary operation then the two elements are said to commute under that operation.


11.2 Mathematical definitions

Further information: Symmetric function

The term “commutative” is used in several related senses.[1][2]

1. A binary operation ∗ on a set S is called commutative if: x ∗ y = y ∗ x for all x, y ∈ S. An operation that does not satisfy the above property is called non-commutative.

2. One says that x commutes with y under ∗ if: x ∗ y = y ∗ x

3. A binary function f : A × A → B is called commutative if: f(x, y) = f(y, x) for all x, y ∈ A

11.3 Examples

11.3.1 Commutative operations in everyday life

• Putting on socks resembles a commutative operation, since which sock is put on first is unimportant. Either way, the result (having both socks on) is the same.

• The commutativity of addition is observed when paying for an item with cash. Regardless of the order the bills are handed over in, they always give the same total.

11.3.2 Commutative operations in mathematics

Two well-known examples of commutative binary operations:[1]

• The addition of real numbers is commutative, since

y + z = z + y for all y, z ∈ R. For example, 4 + 5 = 5 + 4, since both expressions equal 9.

• The multiplication of real numbers is commutative, since

yz = zy for all y, z ∈ R. For example, 3 × 5 = 5 × 3, since both expressions equal 15.

• Some binary truth functions are also commutative, since the truth tables for the functions are the same when one changes the order of the operands.

For example, the logical biconditional function p ↔ q is equivalent to q ↔ p. This function is also written as p IFF q, or as p ≡ q, or as Epq. The last form is an example of the most concise notation in the article on truth functions, which lists the sixteen possible binary truth functions of which eight are commutative: Vpq = Vqp;Apq (OR) = Aqp; Dpq (NAND) = Dqp;Epq (IFF) = Eqp;Jpq = Jqp;Kpq (AND) = Kqp;Xpq (NOR) = Xqp;Opq = Oqp.

• Further examples of commutative binary operations include addition and multiplication of complex numbers, addition and scalar multiplication of vectors, and intersection and union of sets.


The addition of vectors is commutative, because a + b = b + a for any vectors a and b.

11.3.3 Noncommutative operations in everyday life

• Concatenation, the act of joining character strings together, is a noncommutative operation. For example

EA + T = EAT ≠ TEA = T + EA

• Washing and drying clothes resembles a noncommutative operation; washing and then drying produces a markedly different result to drying and then washing.

• Rotating a book 90° around a vertical axis then 90° around a horizontal axis produces a different orientation than when the rotations are performed in the opposite order.

• The twists of the Rubik’s Cube are noncommutative. This can be studied using group theory.

• Thought processes are also noncommutative: a person asked a question (A) and then a question (B) may give different answers to each question than a person asked first (B) and then (A), because asking a question may change the person's state of mind.

11.3.4 Noncommutative operations in mathematics

Some noncommutative binary operations:[3]

• Subtraction is noncommutative, since 0 − 1 ≠ 1 − 0

• Division is noncommutative, since 1/2 ≠ 2/1

• Some truth functions are noncommutative, since the truth tables for the functions are different when one changes the order of the operands.

For example, the truth tables for f(A, B) = A ∧ ¬B (A AND NOT B) and f(B, A) = B ∧ ¬A are different.

• Matrix multiplication is noncommutative since, for example (a numerical check follows this list),

[[0, 2], [0, 1]] = [[1, 1], [0, 1]] · [[0, 1], [0, 1]] ≠ [[0, 1], [0, 1]] · [[1, 1], [0, 1]] = [[0, 1], [0, 1]]

• The vector product (or cross product) of two vectors in three dimensions is anti-commutative, i.e., b × a = −(a × b).
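The matrix example can be reproduced numerically. The following short NumPy sketch (the names A and B are labels chosen here) multiplies the two matrices in both orders.

import numpy as np

A = np.array([[1, 1],
              [0, 1]])
B = np.array([[0, 1],
              [0, 1]])
print(A @ B)   # [[0 2], [0 1]]
print(B @ A)   # [[0 1], [0 1]]  -- a different result
assert not np.array_equal(A @ B, B @ A)   # matrix multiplication does not commute here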

11.4 History and etymology

The first known use of the term was in a French Journal published in 1814

Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products.[4][5] Euclid is known to have assumed the commutative property of multiplication in his book Elements.[6] Formal uses of the commutative property arose in the late 18th and early 19th centuries, when mathematicians began to work on a theory of functions. Today the commutative property is a well known and basic property used in most branches of mathematics.

The first recorded use of the term commutative was in a memoir by François Servois in 1814,[7][8] which used the word commutatives when describing functions that have what is now called the commutative property. The word is a combination of the French word commuter meaning “to substitute or switch” and the suffix -ative meaning “tending to”, so the word literally means “tending to substitute or switch.” The term then appeared in English in Philosophical Transactions of the Royal Society in 1844.[7]

11.5 Propositional logic

11.5.1 Rule of replacement

In truth-functional propositional logic, commutation,[9][10] or commutativity[11] refer to two valid rules of replacement. The rules allow one to transpose propositional variables within logical expressions in logical proofs. The rules are:

(P ∨ Q) ⇔ (Q ∨ P ) and

(P ∧ Q) ⇔ (Q ∧ P ) where " ⇔ " is a metalogical symbol representing “can be replaced in a proof with.”

11.5.2 Truth functional connectives

Commutativity is a property of some logical connectives of truth functional propositional logic. The following logical equivalences demonstrate that commutativity is a property of particular connectives. The following are truth-functional tautologies.

Commutativity of conjunction (P ∧ Q) ↔ (Q ∧ P )

Commutativity of disjunction (P ∨ Q) ↔ (Q ∨ P )

Commutativity of implication (also called the law of permutation): (P → (Q → R)) ↔ (Q → (P → R))

Commutativity of equivalence (also called the complete commutative law of equivalence) (P ↔ Q) ↔ (Q ↔ P )

11.6 Set theory

In group and set theory, many algebraic structures are called commutative when certain operands satisfy the commutative property. In higher branches of mathematics, such as analysis and linear algebra, the commutativity of well-known operations (such as addition and multiplication on real and complex numbers) is often used (or implicitly assumed) in proofs.[12][13][14]

11.7 Mathematical structures and commutativity

• A commutative semigroup is a set endowed with a total, associative and commutative operation.

• If the operation additionally has an identity element, we have a commutative monoid

• An abelian group, or commutative group is a group whose group operation is commutative.[13]

• A commutative ring is a ring whose multiplication is commutative. (Addition in a ring is always commutative.)[15]

• In a field both addition and multiplication are commutative.[16]

11.8 Related properties

11.8.1 Associativity

Main article: Associative property

The associative property is closely related to the commutative property. The associative property of an expression containing two or more occurrences of the same operator states that the order in which the operations are performed does not affect the final result, as long as the order of terms doesn't change. In contrast, the commutative property states that the order of the terms does not affect the final result. Most commutative operations encountered in practice are also associative. However, commutativity does not imply associativity. A counterexample is the function

f(x, y) = (x + y)/2,

which is clearly commutative (interchanging x and y does not affect the result), but it is not associative (since, for example, f(−4, f(0, +4)) = −1 but f(f(−4, 0), +4) = +1). More such examples may be found in commutative non-associative magmas.

11.8.2 Symmetry

Graph showing the symmetry of the addition function

Main article: Symmetry in mathematics

Some forms of symmetry can be directly linked to commutativity. When a commutative operator is written as a binary function then the resulting function is symmetric across the line y = x. As an example, if we let a function f represent addition (a commutative operation) so that f(x,y) = x + y then f is a symmetric function, which can be seen in the image on the right. For relations, a symmetric relation is analogous to a commutative operation, in that if a relation R is symmetric, then aRb ⇔ bRa .

11.9 Non-commuting operators in quantum mechanics

Main article: Canonical commutation relation

In quantum mechanics as formulated by Schrödinger, physical variables are represented by linear operators such as x (meaning multiply by x) and d/dx. These two operators do not commute, as may be seen by considering the effect of their compositions x·(d/dx) and (d/dx)·x (also called products of operators) on a one-dimensional wave function ψ(x):

x·(d/dx)·ψ = x·ψ′  ≠  (d/dx)(x·ψ) = ψ + x·ψ′

According to the uncertainty principle of Heisenberg, if the two operators representing a pair of variables do not commute, then that pair of variables are mutually complementary, which means they cannot be simultaneously measured or known precisely. For example, the position and the linear momentum in the x-direction of a particle are represented respectively by the operators x and −iℏ ∂/∂x (where ℏ is the reduced Planck constant). This is the same example except for the constant −iℏ, so again the operators do not commute and the physical meaning is that the position and linear momentum in a given direction are complementary.

11.10 See also

• Anticommutativity

• Associative Property

• Binary operation

• Centralizer or Commutant

• Commutative diagram

• Commutative (neurophysiology)

• Commutator

• Distributivity

• Parallelogram law

• Particle statistics (for commutativity in physics)

• Quasi-commutative property

• Trace monoid

• Truth function

• Truth table

11.11 Notes

[1] Krowne, p.1

[2] Weisstein, Commute, p.1

[3] Yark, p.1

[4] Lumpkin, p.11

[5] Gay and Shute, p.?

[6] O'Conner and Robertson, Real Numbers

[7] Cabillón and Miller, Commutative and Distributive

[8] O'Conner and Robertson, Servois

[9] Moore and Parker

[10] Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

[11] Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

[12] Axler, p.2

[13] Gallian, p.34

[14] p. 26,87

[15] Gallian p.236

[16] Gallian p.250

11.12 References

11.12.1 Books

• Axler, Sheldon (1997). Linear Algebra Done Right, 2e. Springer. ISBN 0-387-98258-2.

Linear algebra theory. Explains commutativity in chapter 1, uses it throughout.

• Copi, Irving M.; Cohen, Carl (2005). Introduction to Logic. Prentice Hall.

• Gallian, Joseph (2006). Contemporary Abstract Algebra, 6e. Boston, Mass.: Houghton Mifflin. ISBN 0-618-51471-6.

Abstract algebra theory. Covers commutativity in that context. Uses property throughout book.

• Goodman, Frederick (2003). Algebra: Abstract and Concrete, Stressing Symmetry, 2e. Prentice Hall. ISBN 0-13-067342-0.

Abstract algebra theory. Uses commutativity property throughout book.

• Hurley, Patrick (1991). A Concise Introduction to Logic 4th edition. Wadsworth Publishing.

11.12.2 Articles

• http://www.ethnomath.org/resources/lumpkin1997.pdf Lumpkin, B. (1997). The Mathematical Legacy Of Ancient Egypt - A Response To Robert Palter. Unpublished manuscript.

Article describing the mathematical ability of ancient civilizations.

• Robins, R. Gay, and Charles C. D. Shute. 1987. The Rhind Mathematical Papyrus: An Ancient Egyptian Text. London: British Museum Publications Limited. ISBN 0-7141-0944-4

Translation and interpretation of the Rhind Mathematical Papyrus.

11.12.3 Online resources

• Hazewinkel, Michiel, ed. (2001), “Commutativity”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Krowne, Aaron, Commutative at PlanetMath.org., Accessed 8 August 2007.

Definition of commutativity and examples of commutative operations

• Weisstein, Eric W., “Commute”, MathWorld., Accessed 8 August 2007.

Explanation of the term commute

• Yark. Examples of non-commutative operations at PlanetMath.org., Accessed 8 August 2007

Examples proving some noncommutative operations

• O'Conner, J J and Robertson, E F. MacTutor history of real numbers, Accessed 8 August 2007

Article giving the history of the real numbers

• Cabillón, Julio and Miller, Jeff. Earliest Known Uses Of Mathematical Terms, Accessed 22 November 2008

Page covering the earliest uses of mathematical terms

• O'Conner, J J and Robertson, E F. MacTutor biography of François Servois, Accessed 8 August 2007

Biography of François Servois, who first used the term

Chapter 12

Completing the square

Animation depicting the process of completing the square. (Details, animated GIF version)

In elementary algebra, completing the square is a technique for converting a quadratic polynomial of the form

ax2 + bx + c

to the form

a(······ )2 + constant.

In this context, “constant” means not depending on x. The expression inside the parenthesis is of the form (x + constant). Thus


ax2 + bx + c = a(x + h)2 + k

for some values of h and k. Completing the square is used in

• solving quadratic equations,

• graphing quadratic functions,

• evaluating integrals in calculus, such as Gaussian integrals with a linear term in the exponent

• finding Laplace transforms.

In mathematics, completing the square is considered a basic algebraic operation, and is often applied without remark in any computation involving quadratic polynomials. Completing the square is also used to derive the quadratic formula.

12.1 Overview

12.1.1 Background

There is a simple formula in elementary algebra for computing the square of a binomial:

(x + p)2 = x2 + 2px + p2.

For example:

(x + 3)2 = x2 + 6x + 9   (p = 3)

(x − 5)2 = x2 − 10x + 25   (p = −5).

In any perfect square, the number p is always half the coefficient of x, and the constant term is equal to p2.

12.1.2 Basic example

Consider the following quadratic polynomial: x2 + 10x + 28.

This quadratic is not a perfect square, since 28 is not the square of 5:

(x + 5)2 = x2 + 10x + 25.

However, it is possible to write the original quadratic as the sum of this square and a constant: x2 + 10x + 28 = (x + 5)2 + 3.

This is called completing the square. 48 CHAPTER 12. COMPLETING THE SQUARE

12.1.3 General description

Given any monic quadratic

x2 + bx + c, it is possible to form a square that has the same first two terms:

(x + b/2)2 = x2 + bx + b2/4.

This square differs from the original quadratic only in the value of the constant term. Therefore, we can write

x2 + bx + c = (x + b/2)2 + k,

where k is a constant. This operation is known as completing the square. For example:

x2 + 6x + 11 = (x + 3)2 + 2

x2 + 14x + 30 = (x + 7)2 − 19

x2 − 2x + 7 = (x − 1)2 + 6.

12.1.4 Non-monic case

Given a quadratic polynomial of the form

ax2 + bx + c

it is possible to factor out the coefficient a, and then complete the square for the resulting monic polynomial. Example:

3x2 + 12x + 27 = 3(x2 + 4x + 9) = 3((x + 2)2 + 5) = 3(x + 2)2 + 15.

This allows us to write any quadratic polynomial in the form

a(x − h)2 + k.

12.1.5 Formula

The result of completing the square may be written as a formula. For the general case:[1]

ax2 + bx + c = a(x − h)2 + k,   where h = −b/(2a) and k = c − ah2 = c − b2/(4a).

Specifically, when a = 1:

x2 + bx + c = (x − h)2 + k,   where h = −b/2 and k = c − b2/4.

The matrix case looks very similar:

xTAx + xTb + c = (x − h)TA(x − h) + k,   where h = −(1/2)·A−1b and k = c − (1/4)·bTA−1b,

where A has to be symmetric. If A is not symmetric the formulae for h and k have to be generalized to:

h = −(A + AT)−1b and k = c − hTAh = c − bT(A + AT)−1A(A + AT)−1b.
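The scalar formula above is easy to verify mechanically. The following SymPy sketch (an illustration added here, not part of the source) checks the identity symbolically and evaluates h and k for the worked example 3x2 + 12x + 27.

import sympy as sp

# Check: for ax^2 + bx + c, h = -b/(2a) and k = c - b**2/(4a)
# satisfy a*x**2 + b*x + c == a*(x - h)**2 + k.
a, b, c, x = sp.symbols('a b c x')
h = -b / (2 * a)
k = c - b**2 / (4 * a)
assert sp.simplify(a * (x - h)**2 + k - (a * x**2 + b * x + c)) == 0

# numeric example from the non-monic case: 3x^2 + 12x + 27 = 3(x + 2)^2 + 15
print(h.subs({a: 3, b: 12}), k.subs({a: 3, b: 12, c: 27}))   # -2  15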

12.2 Relation to the graph

Graphs of quadratic functions f(x) = (x − h)2 shifted to the right by h = 0, 5, 10, and 15.

Graphs of quadratic functions f(x) = x2 + k shifted upward by k = 0, 5, 10, and 15.

Graphs of quadratic functions f(x) = (x − h)2 + k shifted upward and to the right by 0, 5, 10, and 15.

In analytic geometry, the graph of any quadratic function is a parabola in the xy-plane. Given a quadratic polynomial of the form

(x − h)2 + k or a(x − h)2 + k the numbers h and k may be interpreted as the Cartesian coordinates of the vertex of the parabola. That is, h is the x-coordinate of the axis of symmetry, and k is the minimum value (or maximum value, if a < 0) of the quadratic function. One way to see this is to note that the graph of the function ƒ(x) = x2 is a parabola whose vertex is at the origin (0, 0). Therefore, the graph of the function ƒ(x − h) = (x − h)2 is a parabola shifted to the right by h whose vertex is at (h, 0), as shown in the top figure. In contrast, the graph of the function ƒ(x) + k = x2 + k is a parabola shifted upward by k whose vertex is at (0, k), as shown in the center figure. Combining both horizontal and vertical shifts yields ƒ(x − h) + k = (x − h)2 + k is a parabola shifted to the right by h and upward by k whose vertex is at (h, k), as shown in the bottom figure.

12.3 Solving quadratic equations

Completing the square may be used to solve any quadratic equation. For example: x2 + 6x + 5 = 0,

The first step is to complete the square:

(x + 3)2 − 4 = 0.

Next we solve for the squared term:

(x + 3)2 = 4.

Then either x + 3 = −2 or x + 3 = 2, and therefore x = −5 or x = −1.

This can be applied to any quadratic equation. When the x2 term has a coefficient other than 1, the first step is to divide the equation by this coefficient: for an example see the non-monic case below.

12.3.1 Irrational and complex roots

Unlike methods involving factoring the equation, which is reliable only if the roots are rational, completing the square will find the roots of a quadratic equation even when those roots are irrational or complex. For example, consider the equation x2 − 10x + 18 = 0.

Completing the square gives 12.4. OTHER APPLICATIONS 51

(x − 5)2 − 7 = 0,

so

(x − 5)2 = 7.

Then either

x − 5 = −√7 or x − 5 = √7,

so

x = 5 − √7 or x = 5 + √7.

In terser language:

x = 5 ± √7.

Equations with complex roots can be handled in the same way. For example:

x2 + 4x + 5 = 0

(x + 2)2 + 1 = 0

(x + 2)2 = −1

x + 2 = ±i

x = −2 ± i.

12.3.2 Non-monic case

For an equation involving a non-monic quadratic, the first step in solving it is to divide through by the coefficient of x2. For example:

2x2 + 7x + 6 = 0

x2 + (7/2)x + 3 = 0

(x + 7/4)2 − 1/16 = 0

(x + 7/4)2 = 1/16

x + 7/4 = 1/4 or x + 7/4 = −1/4

x = −3/2 or x = −2.

12.4 Other applications

12.4.1 Integration

Completing the square may be used to evaluate any integral of the form

∫ dx / (ax2 + bx + c)

using the basic integrals

∫ dx / (x2 − a2) = (1/(2a)) ln |(x − a)/(x + a)| + C   and   ∫ dx / (x2 + a2) = (1/a) arctan(x/a) + C.

For example, consider the integral

∫ dx / (x2 + 6x + 13).

Completing the square in the denominator gives:

∫ dx / ((x + 3)2 + 4) = ∫ dx / ((x + 3)2 + 22).

This can now be evaluated by using the substitution u = x + 3, which yields

∫ dx / ((x + 3)2 + 4) = (1/2) arctan((x + 3)/2) + C.
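The worked integral can be checked with the SymPy library; the sketch below (an illustration, not part of the article) integrates directly and also differentiates the claimed antiderivative.

import sympy as sp

x = sp.symbols('x')
antideriv = sp.integrate(1 / (x**2 + 6*x + 13), x)
print(antideriv)   # something equivalent to atan((x + 3)/2)/2

claimed = sp.atan((x + 3) / 2) / 2
# differentiating the claimed antiderivative recovers the integrand
assert sp.simplify(sp.diff(claimed, x) - 1 / (x**2 + 6*x + 13)) == 0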

12.4.2 Complex numbers

Consider the expression

|z|2 − b∗z − bz∗ + c,

where z and b are complex numbers, z* and b* are the complex conjugates of z and b, respectively, and c is a real number. Using the identity |u|2 = uu* we can rewrite this as

|z − b|2 − |b|2 + c,

which is clearly a real quantity. This is because

|z − b|2 = (z − b)(z − b)∗ = (z − b)(z∗ − b∗) = zz∗ − zb∗ − bz∗ + bb∗ = |z|2 − zb∗ − bz∗ + |b|2.

As another example, the expression

ax2 + by2 + c,

where a, b, c, x, and y are real numbers, with a > 0 and b > 0, may be expressed in terms of the square of the absolute value of a complex number. Define

z = √a·x + i√b·y.

Then

|z|2 = z·z∗ = (√a·x + i√b·y)(√a·x − i√b·y) = a·x2 − i√(ab)·xy + i√(ba)·yx − i2·b·y2 = ax2 + by2,

so

ax2 + by2 + c = |z|2 + c.

12.4.3 Idempotent matrix

A matrix M is idempotent when M 2 = M. Idempotent matrices generalize the idempotent properties of 0 and 1. The completion of the square method of addressing the equation

a2 + b2 = a,

shows that some idempotent 2 × 2 matrices are parametrized by a circle in the (a, b)-plane: the matrix

[[a, b], [b, 1 − a]]

will be idempotent provided a2 + b2 = a, which, upon completing the square, becomes

(a − 1/2)2 + b2 = 1/4.

In the (a, b)-plane, this is the equation of a circle with center (1/2, 0) and radius 1/2.
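A short NumPy sketch (sample points chosen here only for illustration) confirms that points on this circle give idempotent matrices of the stated form.

import numpy as np

# Points on the circle (a - 1/2)^2 + b^2 = 1/4 give idempotent matrices [[a, b], [b, 1 - a]].
for t in np.linspace(0.0, 2 * np.pi, 7):
    a = 0.5 + 0.5 * np.cos(t)
    b = 0.5 * np.sin(t)
    M = np.array([[a, b],
                  [b, 1 - a]])
    assert np.allclose(M @ M, M)   # M is idempotent: M*M = M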

12.5 Geometric perspective

Consider completing the square for the equation

x2 + bx = a. Since x2 represents the area of a square with side of length x, and bx represents the area of a rectangle with sides b and x, the process of completing the square can be viewed as visual manipulation of rectangles. Simple attempts to combine the x2 and the bx rectangles into a larger square result in a missing corner. The term (b/2)2 added to each side of the above equation is precisely the area of the missing corner, whence derives the terminology “completing the square”.

12.6 A variation on the technique

As conventionally taught, completing the square consists of adding the third term, v2, to

u2 + 2uv to get a square. There are also cases in which one can add the middle term, either 2uv or −2uv, to

u2 + v2 to get a square.

12.6.1 Example: the sum of a positive number and its reciprocal

By writing

x + 1/x = (x − 2 + 1/x) + 2 = (√x − 1/√x)2 + 2

we show that the sum of a positive number x and its reciprocal is always greater than or equal to 2. The square of a real expression is always greater than or equal to zero, which gives the stated bound; and here we achieve 2 just when x is 1, causing the square to vanish.

12.6.2 Example: factoring a simple quartic polynomial

Consider the problem of factoring the polynomial

x4 + 324.

This is

(x2)2 + (18)2,

so the middle term is 2(x2)(18) = 36x2. Thus we get

x4 + 324 = (x4 + 36x2 + 324) − 36x2
= (x2 + 18)2 − (6x)2   (a difference of two squares)
= (x2 + 18 + 6x)(x2 + 18 − 6x)
= (x2 + 6x + 18)(x2 − 6x + 18)

(the last line being added merely to follow the convention of decreasing degrees of terms).
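The factorization can be confirmed with the SymPy library; the brief sketch below (an illustration added here) factors the quartic and expands the product.

import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**4 + 324))   # the two quadratic factors found above
assert sp.expand((x**2 + 6*x + 18) * (x**2 - 6*x + 18)) == x**4 + 324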

12.7 References

[1] Narasimhan, Revathi (2008). Precalculus: Building Concepts and Connections. Cengage Learning. pp. 133–134. ISBN 0-618-41301-4., Section Formula for the Vertex of a Quadratic Function, page 133–134, figure 2.4.8

• Algebra 1, Glencoe, ISBN 0-07-825083-8, pages 539–544

• Algebra 2, Saxon, ISBN 0-939798-62-X, pages 214–214, 241–242, 256–257, 398–401

12.8 External links

• Completing the square at PlanetMath.org.

• How to Complete the Square, Education Portal Academy

Chapter 13

Constant term

In mathematics, a constant term is a term in an algebraic expression that has a value that is constant or cannot change, because it does not contain any modifiable variables. For example, in the quadratic polynomial

x2 + 2x + 3,

the 3 is a constant term. After like terms are combined, an algebraic expression will have at most one constant term. Thus, it is common to speak of the quadratic polynomial

ax2 + bx + c,

where x is the variable, as having a constant term of c. If c = 0, then the constant term will not actually appear when the quadratic is written out.

A term that is constant remains a constant term when it is multiplied by a constant coefficient (even though the expression could be more simply written as their product), since no variable appears in the new term. If, however, the introduced coefficient contains a variable, the resulting term is no longer constant: for example, in expanding (x + 1)(x − 2), the product of x and −2, namely −2x, is not constant, while the product 1 · (−2) = −2 is still a constant.

x2 + 2xy + y2 − 2x + 2y − 4

has a constant term of −4, which can be considered to be the coefficient of x0y0, where the variables are eliminated by being exponentiated to 0 (any nonzero number exponentiated to 0 becomes 1). For any polynomial, the constant term can be obtained by substituting 0 for each variable, thus eliminating each variable. The concept of exponentiation to 0 can be extended to power series and other types of series; for example, in this power series:

a0 + a1x + a2x2 + a3x3 + ⋯,

a0 is the constant term. In general a constant term is one that does not involve any variables at all. However in expressions that involve terms with other types of factors than constants and powers of variables, the notion of constant term cannot be used in this sense, since that would lead to calling “4” the constant term of (x − 3)2 + 4, whereas substituting 0 for x in this polynomial makes it evaluate to 13.
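Both observations are easy to check with the SymPy library; the short sketch below (added here as an illustration) substitutes 0 into the multivariate polynomial above and into the non-expanded expression (x − 3)2 + 4.

import sympy as sp

x, y = sp.symbols('x y')
p = x**2 + 2*x*y + y**2 - 2*x + 2*y - 4
print(p.subs({x: 0, y: 0}))          # -4, the constant term of the expanded polynomial
print(((x - 3)**2 + 4).subs(x, 0))   # 13, so 4 is not a "constant term" in this sense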


13.1 See also

• Constant (mathematics)

Chapter 14

Cube root

Plot of y = ∛x for x ≥ 0. The complete plot is symmetric with respect to the origin, as it is an odd function. At x = 0 this graph has a vertical tangent.

In mathematics, a cube root of a number x is a number a such that a3 = x. All real numbers (except zero) have exactly one real cube root and a pair of complex conjugate cube roots, and all nonzero complex numbers have three distinct complex cube roots. For example, the real cube root of 8, denoted ∛8, is 2, because 23 = 8, while the other cube roots of 8 are −1 + √3·i and −1 − √3·i. The three cube roots of −27i are

3i,   (3√3)/2 − (3/2)i,   and   −(3√3)/2 − (3/2)i.

The cube root operation is not associative or distributive with addition or subtraction.

In some contexts, particularly when the number whose cube root is to be taken is a real number, one of the cube roots (in this particular case the real one) is referred to as the principal cube root, denoted with the radical sign ∛. The cube root operation is associative with exponentiation and distributive with multiplication and division if considering only real numbers, but not always if considering complex numbers: for example, the cube of any cube root of 8 is 8, but the three cube roots of 83 are 8, −4 + 4i√3, and −4 − 4i√3.


14.1 Formal definition

The cube roots of a number x are the numbers y which satisfy the equation

y3 = x.

14.1.1 Real numbers

For any real number y, there is one real number x such that x3 = y. The cube function is increasing, so it does not give the same result for two different inputs, and it attains every real value. In other words, it is a bijection, and so we can define an inverse function that is also a bijection. For real numbers, we can therefore define a unique cube root of every real number. If this definition is used, the cube root of a negative number is a negative number.


The three cube roots of 1

If x and y are allowed to be complex, then there are three solutions (if x is non-zero) and so x has three cube roots. A real number has one real cube root and two further cube roots which form a complex conjugate pair. This can lead to some interesting results. For instance, the cube roots of the number one are:

1,   −1/2 + (√3/2)i,   and   −1/2 − (√3/2)i.

The last two of these roots lead to a relationship between all roots of any real or complex number. If a number is one cube root of any real or complex number, the other two cube roots can be found by multiplying that number by one or the other of the two complex cube roots of one.

14.1.2 Complex numbers

Plot of the complex cube root together with its two additional leaves. The first picture shows the main branch which is described in the text

For complex numbers, the principal cube root is usually defined by

x^(1/3) = exp((1/3)·ln x),

where ln(x) is the principal branch of the natural logarithm. If we write x as x = r exp(iθ), where r is a non-negative real number and θ lies in the range

−π < θ ≤ π then the principal complex cube root is

∛x = ∛r · exp(iθ/3).

This means that in polar coordinates, we are taking the cube root of the radius and dividing the polar angle by three in order to define a cube root. With this definition, the principal cube root of a negative number is a complex number, and for instance ∛(−8) will not be −2, but rather 1 + i√3. This limitation can easily be avoided if we write the original complex number x in three equivalent forms, namely

x = r exp(iθ),   x = r exp(i(θ + 2π)),   x = r exp(i(θ − 2π)).

The principal complex cube roots of these three forms are then respectively

∛x = ∛r exp(iθ/3),   ∛r exp(i(θ/3 + 2π/3)),   ∛r exp(i(θ/3 − 2π/3)).

Riemann surface of the cube root. One can see how all three leaves fit together

In general, these three complex numbers are distinct, even though the three representations of x were the same. For example, ∛(−8) may then be calculated to be −2, 1 + i√3, or 1 − i√3. In programs that are aware of the imaginary plane, the graph of the cube root of x on the real plane will not display any output for negative values of x. To also include negative roots, these programs must be explicitly instructed to use only real numbers.
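The distinction between the principal complex cube root and the real cube root can be seen with Python's standard cmath module; the sketch below (an illustration added here) evaluates both for −8.

import cmath

# Principal complex cube root of -8: exp((1/3) ln z) = 1 + i*sqrt(3), not -2.
z = -8 + 0j
principal = cmath.exp(cmath.log(z) / 3)
print(principal)                          # approximately (1 + 1.7320508j)

# The real cube root has to be selected separately for negative real inputs.
x = -8.0
real_root = -(-x) ** (1.0 / 3.0) if x < 0 else x ** (1.0 / 3.0)
print(real_root)                          # -2.0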

14.2 Impossibility of compass-and-straightedge construction

Cube roots arise in the problem of finding an angle whose measure is one third that of a given angle (angle trisection) and in the problem of finding the edge of a cube whose volume is twice that of a cube with a given edge (doubling the cube). In 1837 Pierre Wantzel proved that neither of these can be done with a compass-and-straightedge construction.

14.3 Numerical methods

Newton’s method is an iterative method that can be used to calculate the cube root. For real floating point numbers this method reduces to the following iterative algorithm to produce successively better approximations of the cube root of a :

x_(n+1) = ( a/x_n^2 + 2x_n ) / 3.

The method is simply averaging three factors chosen such that x_n × x_n × (a/x_n^2) = a at each iteration. Halley's method improves upon this with an algorithm that converges more quickly with each step, albeit consuming more multiplication operations:

x_(n+1) = x_n · (x_n^3 + 2a) / (2x_n^3 + a).
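Both iterations are easy to implement directly. The following minimal Python sketch (function names and the crude starting value x0 = 1 are choices made here, not part of the source) applies each formula as written above.

def newton_cbrt(a, x0=1.0, iters=60):
    """Newton's iteration above for the real cube root of a > 0."""
    x = x0
    for _ in range(iters):
        x = (a / (x * x) + 2.0 * x) / 3.0
    return x

def halley_cbrt(a, x0=1.0, iters=30):
    """Halley's iteration above; fewer steps needed, more multiplications per step."""
    x = x0
    for _ in range(iters):
        x = x * (x**3 + 2.0 * a) / (2.0 * x**3 + a)
    return x

print(newton_cbrt(27.0))   # 3.0
print(halley_cbrt(2.0))    # about 1.2599210498948732, the cube root of 2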

With either method a poor initial approximation of x0 can give very poor algorithm performance, and coming up with a good initial approximation is somewhat of a black art. Some implementations manipulate the exponent bits of the floating point number; i.e. they arrive at an initial approximation by dividing the exponent by 3. This has the disadvantage of requiring knowledge of the internal representation of the floating point number, and therefore a single implementation is not guaranteed to work across all computing platforms. Also useful is this generalized continued fraction, based on the nth root method: If x is a good first approximation to the cube root of z and y = z − x3, then:

∛z = ∛(x^3 + y) = x + y/(3x^2 + 2y/(2x + 4y/(9x^2 + 5y/(2x + 7y/(15x^2 + 8y/(2x + ⋯))))))

= x + 2x·y/(3(2z − y) − y − 2·4y^2/(9(2z − y) − 5·7y^2/(15(2z − y) − 8·10y^2/(21(2z − y) − ⋯)))).

The second equation combines each pair of fractions from the first into a single fraction, thus doubling the speed of convergence. The advantage is that x and y are only computed once.

14.4 Appearance in solutions of third and fourth degree equations

Cubic equations, which are polynomial equations of the third degree (meaning the highest power of the unknown is 3), can always be solved for their three solutions in terms of cube roots and square roots (although simpler expressions only in terms of square roots exist for all three solutions if at least one of them is a rational number). If two of the solutions are complex numbers, then all three solution expressions involve the real cube roots of two real numbers, while if all three solutions are real numbers then each solution is expressed in terms of the complex cube roots of two complex numbers. Quartic equations can also be solved in terms of cube roots and square roots.

14.5 History

The calculation of cube roots can be traced back to Babylonian mathematicians from as early as 1800 BCE.[1] In the fourth century BCE Plato posed the problem of doubling the cube, which required a compass-and-straightedge construction of the edge of a cube with twice the volume of a given cube; this required the construction, now known to be impossible, of the length equal to the cube root of 2. A method for extracting cube roots appears in The Nine Chapters on the Mathematical Art, a Chinese mathematical text compiled around the 2nd century BCE and commented on by Liu Hui in the 3rd century CE.[2] The Greek mathematician Hero of Alexandria devised a method for calculating cube roots in the 1st century CE. His formula is again mentioned by Eutokios in a commentary on Archimedes.[3] In 499 CE Aryabhata, a mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, gave a method for finding the cube root of numbers having many digits in the Aryabhatiya (section 2.5).[4]

14.6 See also

• Methods of computing square roots

• List of polynomial topics

• Nth root

• Square root

• Nested radical

• Root of unity

• Shifting nth-root algorithm

14.7 References

[1] Saggs, H. W. F. (1989). Civilization Before Greece and Rome. Yale University Press. p. 227. ISBN 978-0-300-05031-8.

[2] Crossley, John; W.-C. Lun, Anthony (1999). The Nine Chapters on the Mathematical Art: Companion and Commentary. Oxford University Press. p. 213. ISBN 978-0-19-853936-0.

[3] Smyly, J. Gilbart (1920). “Heron’s Formula for Cube Root”. Hermathena (Trinity College Dublin) 19 (42): 64–67.

[4] Aryabhatiya (Marathi: आर्यभटीय), Mohan Apte, Pune, India, Rajhans Publications, 2009, p. 62, ISBN 978-81-7434-480-9

14.8 External links

• Cube root calculator reduces any number to simplest radical form

• Computing the Cube Root, K. Turkowski, Apple Technical Report #KT-32, 1998. Includes C source code.

• Cube root at PlanetMath.org.

• Weisstein, Eric W., “Cube Root”, MathWorld.

Chapter 15

Cubic function

This article is about cubic equations in one variable. For cubic equations in two variables, see cubic plane curve.

In mathematics, a cubic function is a function of the form

Graph of a cubic function with 3 real roots (where the curve crosses the horizontal axis—where y = 0). The case shown has two critical points. Here the function is ƒ(x) = (x3 + 3x2 − 6x − 8) / 4.


f(x) = ax3 + bx2 + cx + d,

where a is nonzero. In other words, a cubic function is defined by a polynomial of degree three. Setting ƒ(x) = 0 produces a cubic equation of the form:

ax3 + bx2 + cx + d = 0.

Usually, the coefficients a, b, c, d are real numbers. However, much of the theory of cubic equations for real coefficients applies to other types of coefficients (such as complex ones).[1] Solving the cubic equation is equivalent to finding the particular value (or values) of x for which ƒ(x) = 0. There are various methods to solve cubic equations. The solutions, also called roots, of a cubic equation can always be found algebraically. (This is also true of a quadratic or quartic (fourth degree) equation, but of no higher-degree equation, by the Abel–Ruffini theorem.) The roots can also be found trigonometrically. Alternatively, one can find a numerical approximation of the roots in the field of the real or complex numbers, such as by using root-finding algorithms like Newton's method.

15.1 History

Cubic equations were known to the ancient Babylonians, Greeks, Chinese, Indians, and Egyptians.[2][3][4] Babylonian (20th to 16th centuries BC) cuneiform tablets have been found with tables for calculating cubes and cube roots.[5][6] The Babylonians could have used the tables to solve cubic equations, but no evidence exists to confirm that they did.[7] The problem of doubling the cube involves the simplest and oldest studied cubic equation, and one for which the ancient Egyptians did not believe a solution existed.[8] In the 5th century BC, Hippocrates reduced this problem to that of finding two mean proportionals between one line and another of twice its length, but could not solve this with a compass and straightedge construction,[9] a task which is now known to be impossible. Methods for solving cubic equations appear in The Nine Chapters on the Mathematical Art, a Chinese mathematical text compiled around the 2nd century BC and commented on by Liu Hui in the 3rd century.[3] In the 3rd century, the ancient Greek mathematician Diophantus found integer or rational solutions for some bivariate cubic equations (Diophantine equations).[4][10] Hippocrates, Menaechmus and Archimedes are believed to have come close to solving the problem of doubling the cube using intersecting conic sections,[9] though historians such as Reviel Netz dispute whether the Greeks were thinking about cubic equations or just problems that can lead to cubic equations. Some others like T. L. Heath, who translated all Archimedes’ works, disagree, putting forward evidence that Archimedes really solved cubic equations using intersections of two cones, but also discussed the conditions where the roots are 0, 1 or 2.[11] In the 7th century, the Tang dynasty astronomer mathematician Wang Xiaotong in his mathematical treatise titled Jigu Suanjing systematically established and solved 25 cubic equations of the form x3 + px2 + qx = N , 23 of them with p, q ≠ 0 , and two of them with q = 0 .[12] In the 11th century, the Persian poet-mathematician, Omar Khayyám (1048–1131), made significant progress in the theory of cubic equations. In an early paper he wrote regarding cubic equations, he discovered that a cubic equation can have more than one solution and stated that it cannot be solved using compass and straightedge constructions. He also found a geometric solution.[13][14] In his later work, the Treatise on Demonstration of Problems of Algebra, he wrote a complete classification of cubic equations with general geometric solutions found by means of intersecting conic sections.[15][16] In the 12th century, the Indian mathematician Bhaskara II attempted the solution of cubic equations without general success. However, he gave one example of a cubic equation:[17]

x3 + 12x = 6x2 + 35

In the 12th century, another Persian mathematician, Sharaf al-Dīn al-Tūsī (1135–1213), wrote the Al-Mu'adalat (Treatise on Equations), which dealt with eight types of cubic equations with positive solutions and five types of cubic equations which may not have positive solutions. He used what would later be known as the “Ruffini–Horner method” to numerically approximate the root of a cubic equation. He also developed the concepts of a derivative function and the maxima and minima of curves in order to solve cubic equations which may not have positive solutions.[18] He understood the importance of the discriminant of the cubic equation to find algebraic solutions to certain types of cubic equations.[19]

Leonardo de Pisa, also known as Fibonacci (1170–1250), was able to find the positive solution to the cubic equation x3 + 2x2 + 10x = 20, using the Babylonian numerals. He gave the result as 1,22,7,42,33,4,40 (equivalent to 1 + 22/60 + 7/602 + 42/603 + 33/604 + 4/605 + 40/606),[20] which differs from the correct value by only about three trillionths.

In the early 16th century, the Italian mathematician Scipione del Ferro (1465–1526) found a method for solving a class of cubic equations, namely those of the form x3 + mx = n. In fact, all cubic equations can be reduced to this form if we allow m and n to be negative, but negative numbers were not known to him at that time. Del Ferro kept his achievement secret until just before his death, when he told his student Antonio Fiore about it.

Two-dimensional graph of a cubic, the polynomial ƒ(x) = 2x3 − 3x2 − 3x + 2.

Niccolò Fontana Tartaglia

In 1530, Niccolò Tartaglia (1500–1557) received two problems in cubic equations from Zuanne da Coi and announced that he could solve them. He was soon challenged by Fiore, which led to a famous contest between the two. Each

contestant had to put up a certain amount of money and to propose a number of problems for his rival to solve. Whoever solved more problems within 30 days would get all the money. Tartaglia received questions in the form x3 + mx = n, for which he had worked out a general method. Fiore received questions in the form x3 + mx2 = n, which proved to be too difficult for him to solve, and Tartaglia won the contest. Later, Tartaglia was persuaded by Gerolamo Cardano (1501–1576) to reveal his secret for solving cubic equations. In 1539, Tartaglia did so only on the condition that Cardano would never reveal it and that if he did write a book about cubics, he would give Tartaglia time to publish. Some years later, Cardano learned about Ferro’s prior work and published Ferro’s method in his book Ars Magna in 1545, meaning Cardano gave Tartaglia 6 years to publish his results (with credit given to Tartaglia for an independent solution). Cardano’s promise with Tartaglia stated that he not publish Tartaglia’s work, and Cardano felt he was publishing del Ferro’s, so as to get around the promise. Nevertheless, this led to a challenge to Cardano by Tartaglia, which Cardano denied. The challenge was eventually accepted by Cardano’s student Lodovico Ferrari (1522–1565). Ferrari did better than Tartaglia in the competition, and Tartaglia lost both his prestige and income.[21] Cardano noticed that Tartaglia’s method sometimes required him to extract the square root of a negative number. He even included a calculation with these complex numbers in Ars Magna, but he did not really understand it. Rafael Bombelli studied this issue in detail and is therefore often considered as the discoverer of complex numbers. François Viète (1540–1603) independently derived the trigonometric solution for the cubic with three real roots, and René Descartes (1596–1650) extended the work of Viète.[22]

15.2 Critical points of a cubic function

The critical points of a cubic function are those values of x where the slope of the cubic function is zero. They are found by setting the derivative of the cubic function equal to zero, obtaining f′(x) = 3ax2 + 2bx + c = 0. The solutions of that equation are the critical points of the cubic function and are given, using the quadratic formula, by:

x = ( −b ± √(b2 − 3ac) ) / (3a).

If b2 − 3ac > 0, then the cubic function has a local maximum and a local minimum. If b2 − 3ac = 0, then the cubic's inflection point is the only critical point. If b2 − 3ac < 0, then there are no critical points. In the cases where b2 − 3ac ≤ 0, the cubic function is strictly monotonic.
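The critical-point formula can be checked against a direct computation; the SymPy sketch below (an illustration added here) compares the roots of f′(x) with the closed form above.

import sympy as sp

# Check the formula x = (-b +/- sqrt(b^2 - 3ac)) / (3a) for the critical points
# of f(x) = a*x^3 + b*x^2 + c*x + d.
a, b, c, d, x = sp.symbols('a b c d x')
f = a*x**3 + b*x**2 + c*x + d
crit = sp.solve(sp.diff(f, x), x)
formula = [(-b + sp.sqrt(b**2 - 3*a*c)) / (3*a), (-b - sp.sqrt(b**2 - 3*a*c)) / (3*a)]
assert all(any(sp.simplify(u - v) == 0 for v in formula) for u in crit)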

15.3 Roots of a cubic function

The general cubic equation has the form

ax3 + bx2 + cx + d = 0 (1)

with a ≠ 0 . This section describes how the roots of such an equation may be computed. The coefficients a, b, c, d are generally assumed to be real numbers, but most of the results apply when they belong to any field of characteristic not 2 or 3.

15.3.1 The nature of the roots

Every cubic equation (1) with real coefficients has at least one solution x among the real numbers; this is a consequence of the intermediate value theorem. We can distinguish several possible cases using the discriminant,

∆ = 18abcd − 4b3d + b2c2 − 4ac3 − 27a2d2.

The following cases need to be considered:[23]

The roots, turning points, stationary points, inflection point and concavity of a cubic polynomial x³ − 3x² − 144x + 432 (black line) and its first and second derivatives (red and blue).

• If Δ > 0, then the equation has three distinct real roots.

• If Δ = 0, then the equation has a multiple root and all its roots are real.

• If Δ < 0, then the equation has one real root and two nonreal complex conjugate roots.

For information about the location in the complex plane of the roots of a polynomial of any degree, including degree three, see Properties of polynomial roots and Routh–Hurwitz stability criterion.
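The classification by the sign of the discriminant can be carried out mechanically. The following Python sketch (helper names chosen here for illustration) evaluates ∆ for real coefficients and applies the three cases listed above.

def cubic_discriminant(a, b, c, d):
    """Discriminant of a*x^3 + b*x^2 + c*x + d, as given above."""
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

def classify_roots(a, b, c, d):
    delta = cubic_discriminant(a, b, c, d)
    if delta > 0:
        return "three distinct real roots"
    if delta == 0:
        return "a multiple root; all roots real"
    return "one real root and two complex conjugate roots"

print(classify_roots(1, -3, -144, 432))  # three distinct real roots (-12, 3 and 12)
print(classify_roots(1, 0, 0, 1))        # one real root and two complex conjugate roots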

15.3.2 General formula for roots

For the general cubic equation

ax3 + bx2 + cx + d = 0 the general formula for the roots, in terms of the coefficients, is as follows:[24]

xk = −(1/(3a))·( b + uk·C + ∆0/(uk·C) ),   k ∈ {1, 2, 3},

where

u1 = 1,   u2 = (−1 + i√3)/2,   u3 = (−1 − i√3)/2

are the three cube roots of unity, and where

C = ∛( (∆1 + √(∆1^2 − 4∆0^3)) / 2 )   (see below for special cases),

with

∆0 = b2 − 3ac,   ∆1 = 2b3 − 9abc + 27a2d,

and

∆1^2 − 4∆0^3 = −27a2∆,

where ∆ is the discriminant discussed above. In these formulae, √ and ∛ denote any choice for the square or cube roots. Changing the choice for the square root amounts to exchanging x2 and x3. Changing the choice for the cube root amounts to circularly permuting the roots. Thus the freedom in choosing a determination of the square or cube roots corresponds exactly to the freedom in numbering the roots of the equation. Four centuries ago, Gerolamo Cardano proposed a similar formula (see below), which still appears in many textbooks:

xk = −(1/(3a))·( b + uk·C + ūk·C̄ ),

where

C̄ = ∛( (∆1 − √(∆1^2 − 4∆0^3)) / 2 )

and ūk is the complex conjugate of uk (note that C·C̄ = ∆0). However, this formula is applicable without further explanation only when a, b, c, d are real numbers and the operand of the square root, i.e., ∆1^2 − 4∆0^3, is non-negative. When this operand is real and non-negative, the square root refers to the principal (positive) square root and the cube roots in the formula are to be interpreted as the real ones. Otherwise, there is no real square root and one can arbitrarily choose one of the imaginary square roots (the same one everywhere in the solution). For extracting the complex cube roots of the resulting complex expression, we have also to choose among three cube roots in each part of each solution, giving nine possible combinations of one of three cube roots for the first part of the expression and one of three for the second. The correct combination is such that the two cube roots chosen for the two terms in a given solution expression are complex conjugates of each other (whereby the two imaginary terms in each solution cancel out). The next sections describe how these formulas may be obtained.

Special cases

If Δ ≠ 0 and Δ0 = 0, the sign of √(Δ1² − 4Δ0³) = √(Δ1²) has to be chosen to have C ≠ 0; that is, one should define √(Δ1²) = Δ1, whichever is the sign of Δ1.

If ∆ = 0 and ∆0 = 0, the three roots are equal:

x1 = x2 = x3 = −b/(3a).

If ∆ = 0 and ∆0 ≠ 0, the above expression for the roots is correct but misleading, hiding the fact that no radical is needed to represent the roots. In fact, in this case, there is a double root,

x1 = x2 = (9ad − bc)/(2Δ0),

and a simple root

x3 = (4abc − 9a²d − b³)/(aΔ0).
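The general formula, together with the special cases just listed, can be evaluated with complex arithmetic. The following Python sketch is ours (not from the article): it computes Δ0, Δ1 and C, switches the sign of the square root when the case Δ0 = 0 would make C vanish, and returns the triple root −b/(3a) when Δ0 = Δ1 = 0.

import cmath

def cubic_roots(a, b, c, d):
    """Roots of a*x^3 + b*x^2 + c*x + d = 0 via x_k = -(b + u_k*C + D0/(u_k*C)) / (3a)."""
    d0 = b**2 - 3*a*c                       # Delta_0
    d1 = 2*b**3 - 9*a*b*c + 27*a**2*d       # Delta_1
    s = cmath.sqrt(d1**2 - 4*d0**3)
    C = ((d1 + s) / 2) ** (1/3)             # principal complex cube root
    if C == 0:
        C = ((d1 - s) / 2) ** (1/3)         # other square-root sign (case Delta_0 = 0)
    if C == 0:                              # Delta_0 = Delta_1 = 0: triple root
        return [-b / (3*a)] * 3
    u = [1, (-1 + cmath.sqrt(-3)) / 2, (-1 - cmath.sqrt(-3)) / 2]   # cube roots of unity
    return [-(b + uk*C + d0/(uk*C)) / (3*a) for uk in u]

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6: roots 1, 2, 3 (in some order, since a
# different choice of square or cube root only renumbers them).
print([complex(round(r.real, 6), round(r.imag, 6)) for r in cubic_roots(1, -6, 11, -6)])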

15.3.3 Reduction to a depressed cubic

Dividing Equation (1) by a and substituting t − b/(3a) for x (the Tschirnhaus transformation), we get the equation

t³ + pt + q = 0   (2)

where

p = (3ac − b²)/(3a²)
q = (2b³ − 9abc + 27a²d)/(27a³).

The left-hand side of equation (2) is a monic trinomial called a depressed cubic. Any formula for the roots of a depressed cubic may be transformed into a formula for the roots of Equation (1) by substituting the above values for p and q and using the relation x = t − b/(3a). Therefore, only Equation (2) is considered in the following.
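A small sketch (ours) of this reduction; the helper names depress and undepress are invented for illustration.

def depress(a, b, c, d):
    """Return (p, q) such that substituting x = t - b/(3a) in
    a*x^3 + b*x^2 + c*x + d = 0 yields t^3 + p*t + q = 0."""
    p = (3*a*c - b**2) / (3*a**2)
    q = (2*b**3 - 9*a*b*c + 27*a**2*d) / (27*a**3)
    return p, q

def undepress(t, a, b):
    """Map a root t of the depressed cubic back to x = t - b/(3a)."""
    return t - b / (3*a)

# Example: x^3 - 6x^2 + 11x - 6 = 0 becomes t^3 - t = 0, i.e. p = -1, q = 0.
print(depress(1, -6, 11, -6))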

15.3.4 Cardano’s method

The solutions can be found with the following method due to Scipione del Ferro and Tartaglia, published by Gerolamo Cardano in 1545.[25] This method applies to the depressed cubic

t³ + pt + q = 0.   (2)

We introduce two variables u and v linked by the condition u + v = t

and substitute this in the depressed cubic (2), giving

u³ + v³ + (3uv + p)(u + v) + q = 0   (3)

At this point Cardano imposed a second condition for the variables u and v:

3uv + p = 0

As the factor (3uv + p) vanishes in (3), we get u³ + v³ = −q and u³v³ = −p³/27. Thus u³ and v³ are the two roots of the equation

z² + qz − p³/27 = 0.

At this point, Cardano, who did not know complex numbers, supposed that the roots of this equation were real, that is that q²/4 + p³/27 > 0. Solving this equation and using the fact that u and v may be exchanged, we find

u = ∛(−q/2 + √(q²/4 + p³/27))   and   v = ∛(−q/2 − √(q²/4 + p³/27)).

As these expressions are real, their cube roots are well-defined and, like Cardano, we get

t1 = u + v = ∛(−q/2 + √(q²/4 + p³/27)) + ∛(−q/2 − √(q²/4 + p³/27)).

Given the assumption that q²/4 + p³/27 > 0, Equation (2) also has two complex roots. These are obtained by considering the complex cube roots appearing in the above formula; the fact that uv is real implies that one is obtained by multiplying the first of the above cube roots by (−1 + i√3)/2 and the second by (−1 − i√3)/2, and vice versa for the other one. If q²/4 + p³/27 is not necessarily positive, we have to choose a cube root of u³. As there is no direct way to choose the corresponding cube root of v³, one has to use the relation v = −p/(3u), which gives

u = ∛(−q/2 − √(q²/4 + p³/27))   (4)

and

t = u − p/(3u).

Note that the sign of the square root does not affect the resulting t, because changing it amounts to exchanging u and v. We have chosen the minus sign to have u ≠ 0 when p = 0 and q ≠ 0, in order to avoid a division by zero. With this choice, the above expression for t always works, except when p = q = 0, where the second term becomes 0/0. In this case there is a triple root t = 0. Note also that in several cases the solutions are expressed with fewer square or cube roots:

If p = q = 0 then we have the triple real root

t = 0.

If p = 0 and q ≠ 0 then

u = −∛q and v = 0

and the three roots are the three cube roots of −q.

If p ≠ 0 and q = 0 then

u = √(p/3) and v = −√(p/3),

in which case the three roots are

t = u + v = 0,   t = ω1·u − p/(3ω1·u) = √(−p),   t = u/ω1 − ω1·p/(3u) = −√(−p),

where

ω1 = e^(2πi/3) = −1/2 + (√3/2)i.

Finally if 4p³ + 27q² = 0 and p ≠ 0, there are a double root and an isolated root which may be expressed rationally in terms of p and q, but these expressions may not be immediately deduced from the general expression of the roots:

t1 = t2 = −3q/(2p)   and   t3 = 3q/p.

To pass from these roots of t in Equation (2) to the general formulas for roots of x in Equation (1), subtract b/(3a) and replace p and q by their expressions in terms of a, b, c, d.
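Cardano's method as just described can be turned into a short routine. The sketch below is ours: it uses formula (4) for u with the minus sign, switches to the other branch of the square root if that choice makes u vanish (the case p = 0 and q ≠ 0 discussed above), and returns the single root t = u − p/(3u); complex arithmetic keeps it valid in the casus irreducibilis.

import cmath, math

def cbrt(z):
    """Real cube root for (essentially) real z, principal complex cube root otherwise."""
    if abs(z.imag) > 1e-12:
        return z ** (1/3)
    return math.copysign(abs(z.real) ** (1/3), z.real)

def cardano_depressed(p, q):
    """One root of t^3 + p*t + q = 0 by Cardano's method, t = u - p/(3u)."""
    if p == 0 and q == 0:
        return 0.0                          # triple root t = 0
    r = cmath.sqrt(q*q/4 + p**3/27)         # always returned as a complex value
    u = cbrt(-q/2 - r)                      # formula (4), minus sign
    if u == 0:
        u = cbrt(-q/2 + r)                  # other square-root branch, so that u != 0
    return u - p/(3*u)

print(cardano_depressed(3, -4))    # t^3 + 3t - 4 = 0: real root t = 1
print(cardano_depressed(-15, -4))  # t^3 - 15t - 4 = 0 (casus irreducibilis): t = 4, up to rounding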

15.3.5 Vieta’s substitution

Starting from the depressed cubic

t³ + pt + q = 0,

we make the following substitution, known as Vieta’s substitution:

t = w − p/(3w)

This results in the equation

w³ + q − p³/(27w³) = 0.

Multiplying by w³, it becomes a sextic equation in w, which is in fact a quadratic equation in w³:

w⁶ + qw³ − p³/27 = 0

The quadratic formula allows this to be solved for w³. If w1, w2 and w3 are the three cube roots of one of the solutions in w³, then the roots of the original depressed cubic are

t1 = w1 − p/(3w1),   t2 = w2 − p/(3w2)   and   t3 = w3 − p/(3w3).
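A sketch (ours) of Vieta's substitution: solve the quadratic in w³, take the three cube roots of one of its solutions, and map each back through t = w − p/(3w); the case p = 0 is treated separately to avoid dividing by zero.

import cmath

def vieta_depressed(p, q):
    """Roots of t^3 + p*t + q = 0 via Vieta's substitution t = w - p/(3w)."""
    if p == 0:
        r = (-q + 0j) ** (1/3)              # the roots are just the cube roots of -q
        return [r * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    w3 = (-q + cmath.sqrt(q*q + 4*p**3/27)) / 2    # one solution of w^6 + q*w^3 - p^3/27 = 0
    w1 = w3 ** (1/3)
    ws = [w1 * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    return [w - p/(3*w) for w in ws]

# t^3 - t = 0 has roots 0, 1 and -1 (returned in some order).
print([complex(round(t.real, 6), round(t.imag, 6)) for t in vieta_depressed(-1, 0)])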

15.3.6 Lagrange’s method

In his paper Réflexions sur la résolution algébrique des équations (“Thoughts on the algebraic solving of equations”), Joseph Louis Lagrange introduced a new method to solve equations of low degree. This method works well for cubic and quartic equations, but Lagrange did not succeed in applying it to a quintic equation, because it requires solving a resolvent polynomial of degree at least six.[26][27][28] This is explained by the Abel–Ruffini theorem, which proves that such polynomials cannot be solved by radicals. Nevertheless, the modern methods for solving solvable quintic equations are mainly based on Lagrange’s method.[28] In the case of cubic equations, Lagrange’s method gives the same solution as Cardano’s. By drawing attention to a geometrical problem that involves two cubes of different size, Cardano explains in his book Ars Magna how he arrived at the idea of considering the unknown of the cubic equation as a sum of two other quantities. Lagrange’s method may also be applied directly to the general cubic equation (1) without using the reduction to the depressed cubic equation (2); nevertheless, the computation is much easier with this reduced equation. Suppose that x0, x1 and x2 are the roots of equation (1) or (2), and define ζ = −1/2 + (√3/2)i (a complex cube root of 1, i.e. a primitive third root of unity), which satisfies the relation ζ² + ζ + 1 = 0. We now set

s0 = x0 + x1 + x2,

s1 = x0 + ζx1 + ζ²x2,

s2 = x0 + ζ²x1 + ζx2.

This is the discrete Fourier transform of the roots: observe that while the coefficients of the polynomial are symmetric in the roots, in this formula an order has been chosen on the roots, so these are not symmetric in the roots. The roots may then be recovered from the three si by inverting the above linear transformation via the inverse discrete Fourier transform, giving

x0 = (1/3)(s0 + s1 + s2),

x1 = (1/3)(s0 + ζ²s1 + ζs2),

x2 = (1/3)(s0 + ζs1 + ζ²s2).

The polynomial s0 is an elementary symmetric polynomial and is thus equal to −b/a in case of Equation (1) and to zero in case of Equation (2), so we only need to seek values for the other two.

The polynomials s1 and s2 are not symmetric functions of the roots: s0 is invariant, while the two non-trivial cyclic permutations of the roots send s1 to ζs1 and s2 to ζ²s2, or s1 to ζ²s1 and s2 to ζs2 (depending on which permutation), while transposing x1 and x2 switches s1 and s2; other transpositions switch these roots and multiply them by a power of ζ. Thus, s1³, s2³ and s1s2 are left invariant by the cyclic permutations of the roots, which multiply them by ζ³ = 1. Also s1s2 and s1³ + s2³ are left invariant by the transposition of x1 and x2 which exchanges s1 and s2. As the permutation group S3 of the roots is generated by these permutations, it follows that s1³ + s2³ and s1s2 are symmetric functions of the roots and may thus be written as polynomials in the elementary symmetric polynomials and thus as rational functions of the coefficients of the equation. Let s1³ + s2³ = A and s1s2 = B in these expressions, which will be explicitly computed below. We have that s1³ and s2³ are the two roots of the quadratic equation

z² − Az + B³ = 0.

Thus the resolution of the equation may be finished exactly as described for Cardano’s method, with s1 and s2 in place of u and v.

Computation of A and B

Setting E1 = x0 + x1 + x2, E2 = x0x1 + x1x2 + x2x0 and E3 = x0x1x2, the elementary symmetric polynomials, we have, using that ζ³ = 1:

s1³ = x0³ + x1³ + x2³ + 3ζ(x0²x1 + x1²x2 + x2²x0) + 3ζ²(x0x1² + x1x2² + x2x0²) + 6x0x1x2.

The expression for s2³ is the same with ζ and ζ² exchanged. Thus, using ζ + ζ² = −1 we get

A = s1³ + s2³ = 2(x0³ + x1³ + x2³) − 3(x0²x1 + x1²x2 + x2²x0 + x0x1² + x1x2² + x2x0²) + 12x0x1x2,

and a straightforward computation gives

A = s1³ + s2³ = 2E1³ − 9E1E2 + 27E3.

Similarly we have

B = s1s2 = x0² + x1² + x2² + (ζ + ζ²)(x0x1 + x1x2 + x2x0) = E1² − 3E2.

When solving Equation (1) we have

E1 = −b/a , E2 = c/a and E3 = −d/a

With Equation (2), we have E1 = 0 , E2 = p and E3 = −q and thus:

A = −27q and B = −3p .

Note that with Equation (2), we have x0 = (1/3)(s1 + s2) and s1s2 = −3p, while in Cardano’s method we have set x0 = u + v and uv = −(1/3)p. Thus we have, up to the exchange of u and v:

s1 = 3u and s2 = 3v .

In other words, in this case, Cardano’s and Lagrange’s method compute exactly the same things, up to a factor of three in the auxiliary variables, the main difference being that Lagrange’s method explains why these auxiliary variables appear in the problem.
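For the depressed cubic, the whole computation collapses into a few lines. The sketch below is ours: it takes A = −27q and B = −3p, solves z² − Az + B³ = 0 for s1³, sets s2 = B/s1, and recovers the roots from the inverse discrete Fourier transform with s0 = 0.

import cmath

def lagrange_depressed(p, q):
    """Roots of t^3 + p*t + q = 0 by Lagrange's method."""
    zeta = cmath.exp(2j * cmath.pi / 3)              # primitive cube root of unity
    A, B = -27*q, -3*p
    s1cubed = (A + cmath.sqrt(A*A - 4*B**3)) / 2     # a root of z^2 - A*z + B^3 = 0
    s1 = s1cubed ** (1/3)                            # any cube root; choices permute the roots
    s2 = B / s1 if s1 != 0 else 0                    # since s1*s2 = B
    x0 = (s1 + s2) / 3
    x1 = (zeta**2 * s1 + zeta * s2) / 3
    x2 = (zeta * s1 + zeta**2 * s2) / 3
    return [x0, x1, x2]

# t^3 - 7t + 6 = (t - 1)(t - 2)(t + 3): roots 2, 1, -3 (order depends on the cube root chosen).
print([complex(round(t.real, 6), round(t.imag, 6)) for t in lagrange_depressed(-7, 6)])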

15.3.7 Trigonometric (and hyperbolic) method

When a cubic equation has three real roots, the formulas expressing these roots in terms of radicals involve complex numbers. It has been proved that when none of the three real roots is rational—the casus irreducibilis—one cannot express the roots in terms of real radicals. Nevertheless, purely real expressions of the solutions may be obtained using hypergeometric functions,[29] or more elementarily in terms of trigonometric functions, specifically in terms of the cosine and arccosine functions. The formulas which follow, due to François Viète,[22] are true in general (except when p = 0), are purely real when the equation has three real roots, but involve complex cosines and arccosines when there is only one real root. Starting from Equation (2), t³ + pt + q = 0, let us set t = u cos θ. The idea is to choose u to make Equation (2) coincide with the identity

4cos³θ − 3cosθ − cos(3θ) = 0.

In fact, choosing u = 2√(−p/3) and dividing Equation (2) by u³/4, we get

4cos³θ − 3cosθ − (3q/(2p))√(−3/p) = 0.

Combining with the above identity, we get

cos(3θ) = (3q/(2p))√(−3/p)

and thus the roots are[30]

tk = 2√(−p/3) cos( (1/3) arccos( (3q/(2p))√(−3/p) ) − 2πk/3 )   for k = 0, 1, 2.

This formula involves only real terms if p < 0 and the argument of the arccosine is between −1 and 1. The last condition is equivalent to 4p³ + 27q² ≤ 0, which also implies p < 0. Thus the above formula for the roots involves only real terms if and only if the three roots are real.

Denoting by C(p, q) the above value of t0, and using the inequality −π ≤ arccos(u) ≤ π for a real number u such that −1 ≤ u ≤ 1 , the three roots may also be expressed as

t0 = C(p, q), t2 = −C(p, −q), t1 = −t0 − t2 .

If the three roots are real, we have

t0 ≥ t1 ≥ t2 .
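When 4p³ + 27q² ≤ 0 (hence p < 0 and three real roots), the formula can be evaluated in purely real arithmetic. A sketch of ours:

import math

def viete_real_roots(p, q):
    """The three real roots of t^3 + p*t + q = 0, assuming 4p^3 + 27q^2 <= 0 (so p < 0)."""
    assert 4*p**3 + 27*q**2 <= 0 and p < 0
    arg = 3*q / (2*p) * math.sqrt(-3/p)
    arg = max(-1.0, min(1.0, arg))          # clamp against rounding at the boundary
    theta = math.acos(arg) / 3
    m = 2 * math.sqrt(-p/3)
    return [m * math.cos(theta - 2*math.pi*k/3) for k in (0, 1, 2)]

# t^3 - 7t + 6 = (t - 1)(t - 2)(t + 3): returns approximately [2.0, 1.0, -3.0].
print([round(t, 6) for t in viete_real_roots(-7, 6)])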

All these formulas may be straightforwardly transformed into formulas for the roots of the general cubic equation (1), using the back substitution described in Section Reduction to a depressed cubic. When there is only one real root (and p ≠ 0), it may be similarly represented using hyperbolic functions, as[31][32]

t0 = −2 (|q|/q) √(−p/3) cosh( (1/3) arcosh( (−3|q|/(2p)) √(−3/p) ) )   if 4p³ + 27q² > 0 and p < 0,

t0 = −2 √(p/3) sinh( (1/3) arsinh( (3q/(2p)) √(3/p) ) )   if p > 0.

If p ≠ 0 and the inequalities on the right are not satisfied, the formulas remain valid but involve complex quantities.[33] When p = ±3, the above values of t0 are sometimes called the Chebyshev cube root. More precisely, the values involving cosines and hyperbolic cosines define, when p = −3, the same analytic function denoted C1/3(q), which is the proper Chebyshev cube root. The value involving hyperbolic sines is similarly denoted S1/3(q), when p = 3.
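The hyperbolic formulas can be coded the same way for the single-real-root case; a sketch of ours, using math.acosh and math.asinh:

import math

def single_real_root(p, q):
    """The real root of t^3 + p*t + q = 0 when 4p^3 + 27q^2 > 0 and p != 0."""
    assert 4*p**3 + 27*q**2 > 0 and p != 0
    if p < 0:
        return (-2 * abs(q)/q * math.sqrt(-p/3)
                * math.cosh(math.acosh(-3*abs(q)/(2*p) * math.sqrt(-3/p)) / 3))
    return (-2 * math.sqrt(p/3)
            * math.sinh(math.asinh(3*q/(2*p) * math.sqrt(3/p)) / 3))

print(round(single_real_root(3, -4), 6))    # t^3 + 3t - 4 = 0: real root 1.0
print(round(single_real_root(-3, -4), 6))   # t^3 - 3t - 4 = 0: real root ~ 2.195823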

15.3.8 Factorization

If the cubic equation ax³ + bx² + cx + d = 0 with integer coefficients has a rational real root, it can be found using the rational root test: if the root is r = m/n fully reduced, then m is a factor of d and n is a factor of a, so all possible combinations of values for m and n can be checked for whether they satisfy the cubic equation. The rational root test may also be used for a cubic equation with rational coefficients: by multiplication by the lowest common denominator of the coefficients, one gets an equation with integer coefficients which has exactly the same roots. The rational root test is particularly useful when there are three real roots because the algebraic solution unhelpfully expresses the real roots in terms of complex entities. The rational root test is also helpful in the presence of one real and two complex roots because it allows all of the roots to be written without the use of cube roots: if r is any root of the cubic, then we may factor out (x − r) using polynomial long division to obtain

(x − r)(ax² + (b + ar)x + c + br + ar²) = ax³ + bx² + cx + d.

Hence if we know one root, perhaps from the rational root test, we can find the other two by using the quadratic formula to solve the quadratic ax² + (b + ar)x + c + br + ar², giving

(−b − ra ± √(b² − 4ac − 2abr − 3a²r²)) / (2a)

for the other two roots.
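A sketch (ours) combining the rational root test with the quadratic factor above: it tries every candidate r = ±m/n with m a divisor of d and n a divisor of a, and once a rational root is found it obtains the other two roots from the quadratic formula just given.

import cmath

def divisors(n):
    """Positive divisors of a nonzero integer n."""
    n = abs(n)
    return [k for k in range(1, n + 1) if n % k == 0]

def solve_cubic_rational(a, b, c, d):
    """Roots of a*x^3 + b*x^2 + c*x + d = 0 (integer coefficients, a != 0, d != 0),
    provided some root r = m/n is rational."""
    for m in divisors(d):
        for n in divisors(a):
            for r in (m/n, -m/n):
                if abs(a*r**3 + b*r**2 + c*r + d) < 1e-9:        # r is a root
                    disc = cmath.sqrt(b*b - 4*a*c - 2*a*b*r - 3*a*a*r*r)
                    return [r, (-b - a*r + disc) / (2*a), (-b - a*r - disc) / (2*a)]
    return None                                                  # no rational root found

# 2x^3 - 3x^2 - 11x + 6 = (2x - 1)(x - 3)(x + 2): roots 1/2, 3 and -2.
print(solve_cubic_rational(2, -3, -11, 6))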

15.3.9 Geometric interpretation of the roots

Three real roots

Viète’s trigonometric expression of the roots in the three-real-roots case lends itself to a geometric interpretation in terms of a circle.[22][34] When the cubic is written in depressed form t³ + pt + q = 0, as shown above, the solution can be expressed as

tk = 2√(−p/3) cos( (1/3) arccos( (3q/(2p))√(−3/p) ) − 2πk/3 )   for k = 0, 1, 2.

Here arccos( (3q/(2p))√(−3/p) ) is an angle in the unit circle; taking 1/3 of that angle corresponds to taking a cube root of a complex number; adding −2πk/3 for k = 1, 2 finds the other cube roots; and multiplying the cosines of these resulting angles by 2√(−p/3) corrects for scale. For the non-depressed case x³ + bx² + cx + d = 0 (shown in the accompanying graph), the depressed case as indicated previously is obtained by defining t such that x = t − b/3, so t = x + b/3. Graphically this corresponds to simply shifting the graph horizontally when changing between the variables t and x, without changing the angle relationships. This shift moves the point of inflection and the centre of the circle onto the y-axis. Consequently, the roots of the equation in t sum to zero.

One real and two complex roots

In the Cartesian plane If a cubic is plotted in the Cartesian plane, the real root can be seen graphically as the horizontal intercept of the curve. But further,[35][36][37] if the complex conjugate roots are written as g ± hi, then g is the abscissa (the positive or negative horizontal distance from the origin) of the tangency point of a line that is tangent to the cubic curve and intersects the horizontal axis at the same place as does the cubic curve; and |h| is the square root of the tangent of the angle between this line and the horizontal axis.

In the complex plane With one real and two complex roots, the three roots can be represented as points in the complex plane, as can the two roots of the cubic’s derivative. There is an interesting geometrical relationship among all these roots. The points in the complex plane representing the three roots serve as the vertices of an isosceles triangle. (The triangle is isosceles because one root is on the horizontal (real) axis and the other two roots, being complex conjugates, appear symmetrically above and below the real axis.) Marden’s theorem says that the points representing the roots of the derivative of the cubic are the foci of the Steiner inellipse of the triangle—the unique ellipse that is tangent to the triangle at the midpoints of its sides. If the angle at the vertex on the real axis is less than π/3, then the major axis of the ellipse lies on the real axis, as do its foci and hence the roots of the derivative. If that angle is greater than π/3, the major axis is vertical and its foci, the roots of the derivative, are complex. And if that angle is π/3, the triangle is equilateral, the Steiner inellipse is simply the triangle’s incircle, its foci coincide with each other at the incenter, which lies on the real axis, and hence the derivative has duplicate real roots.

Omar Khayyám’s solution

As shown in this graph, to solve the third-degree equation x³ + a²x = b where b > 0, Omar Khayyám constructed the parabola y = x²/a, the circle which has as a diameter the line segment [0, b/a²] of the positive x-axis, and a vertical line through the point where the circle and the parabola intersect above the x-axis. The solution is given by the length of the horizontal line segment from the origin to the intersection of the vertical line and the x-axis. A simple modern proof of the method is the following: multiplying the equation by x and regrouping the terms gives

x⁴/a² = x(b/a² − x).

The left-hand side is the value of y² on the parabola. The equation of the circle being y² + x(x − b/a²) = 0, the right-hand side is the value of y² on the circle.

15.4 Collinearities

The tangent lines to a cubic at three collinear points intercept the cubic again at collinear points.[38]:p. 425,#290

15.5 Applications

Cubic equations arise in various other contexts. Marden’s theorem states that the foci of the Steiner inellipse of any triangle can be found by using the cubic function whose roots are the coordinates in the complex plane of the triangle’s three vertices. The roots of the first derivative of this cubic are the complex coordinates of those foci. Given the cosine (or other trigonometric function) of an arbitrary angle, the cosine of one-third of that angle is one of the roots of a cubic. The solution of the general quartic equation relies on the solution of its resolvent cubic. In analytical chemistry, the Charlot equation, which can be used to find the pH of buffer solutions, can be solved using a cubic equation.

15.6 See also

• Algebraic equation
• Linear equation
• Newton’s method
• Polynomial
• Quadratic equation
• Quartic equation
• Quintic equation
• Spline (mathematics)

15.7 Notes

[1] Exceptions include fields of characteristic 2 and 3.

[2] British Museum BM 85200

[3] Crossley, John; W.-C. Lun, Anthony (1999). The Nine Chapters on the Mathematical Art: Companion and Commentary. Oxford University Press. p. 176. ISBN 978-0-19-853936-0.

[4] Van der Waerden, Geometry and Algebra of Ancient Civilizations, chapter 4, Zurich 1983 ISBN 0-387-12159-5

[5] Cooke, Roger (8 November 2012). The History of Mathematics. John Wiley & Sons. p. 63. ISBN 978-1-118-46029-0.

[6] Nemet-Nejat, Karen Rhea (1998). Daily Life in Ancient Mesopotamia. Greenwood Publishing Group. p. 306. ISBN 978-0-313-29497-6.

[7] Cooke, Roger (2008). Classical Algebra: Its Nature, Origins, and Uses. John Wiley & Sons. p. 64. ISBN 978-0-470-27797-3.

[8] Guilbeau (1930, p. 8) states that “the Egyptians considered the solution impossible, but the Greeks came nearer to a solution.”

[9] Guilbeau (1930, pp. 8–9)

[10] Heath, Thomas L. (April 30, 2009). Diophantus of Alexandria: A Study in the History of Greek Algebra. Martino Pub. pp. 87–91. ISBN 978-1578987542.

[11] Archimedes (October 8, 2007). The works of Archimedes. Translation by T. L. Heath. Rough Draft Printing. ISBN 978-1603860512.

[12] Mikami, Yoshio (1974) [1913], “Chapter 8 Wang Hsiao-Tung and Cubic Equations”, The Development of Mathematics in China and Japan (2nd ed.), New York: Chelsea Publishing Co., pp. 53–56, ISBN 978-0-8284-0149-4

[13] A paper of Omar Khayyam, Scripta Math. 26 (1963), pages 323–337

[14] In O'Connor, John J.; Robertson, Edmund F., “Omar Khayyam”, MacTutor History of Mathematics archive, University of St Andrews. one may read This problem in turn led Khayyam to solve the cubic equation x3 + 200x = 20x2 + 2000 and he found a positive root of this cubic by considering the intersection of a rectangular hyperbola and a circle. An approximate numerical solution was then found by interpolation in trigonometric tables. The then in the last assertion is erroneous and should, at least, be replaced by also. The geometric construction was perfectly suitable for Omar Khayyam, as it occurs for solving a problem of geometric construction. At the end of his article he says only that, for this geometrical problem, if approximations are sufficient, then a simpler solution may be obtained by consulting trigonometric tables. Textually: If the seeker is satisfied with an estimate, it is up to him to look into the table of chords of Almagest, or the table of sines and versed sines of Mothmed Observatory. This is followed by a short description of this alternate method (seven lines).

[15] J. J. O'Connor and E. F. Robertson (1999), Omar Khayyam, MacTutor History of Mathematics archive, states, “Khayyam himself seems to have been the first to conceive a general theory of cubic equations.”

[16] Guilbeau (1930, p. 9) states, “Omar Al Hay of Chorassan, about 1079 AD did most to elevate to a method the solution of the algebraic equations by intersecting conics.”

[17] Datta and Singh, History of Hindu Mathematics, p. 76,Equation of Higher Degree; Bharattya Kala Prakashan, Delhi, India 2004 ISBN 81-86050-86-8

[18] O'Connor, John J.; Robertson, Edmund F., “Sharaf al-Din al-Muzaffar al-Tusi”, MacTutor History of Mathematics archive, University of St Andrews.

[19] Berggren, J. L. (1990), “Innovation and Tradition in Sharaf al-Din al-Tusi’s Muadalat”, Journal of the American Oriental Society 110 (2): 304–309, doi:10.2307/604533

[20] R. N. Knott and the Plus Team (November 4, 2013), “The life and numbers of Fibonacci”, Plus Magazine

[21] Katz, Victor (2004). A History of Mathematics. Boston: Addison Wesley. p. 220. ISBN 9780321016188.

[22] Nickalls, R. W. D. (July 2006), “Viète, Descartes and the cubic equation” (PDF), Mathematical Gazette 90: 203–208

[23] Irving, Ronald S. (2004), Integers, polynomials, and rings, Springer-Verlag New York, Inc., ISBN 0-387-40397-3, Chapter 10 ex 10.14.4 and 10.17.4, pp. 154–156

[24] Press, William H.; Vetterling, William T. (1992). Numerical Recipes in Fortran 77: The Art of Scientific Computing. Cambridge University Press. p. 179. ISBN 0-521-43064-X., Extract of page 179

[25] Jacobson 2009, p. 210

[26] Prasolov, Viktor; Solovyev, Yuri (1997), Elliptic functions and elliptic integrals, AMS Bookstore, ISBN 978-0-8218-0587-9, §6.2, p. 134

[27] Kline, Morris (1990), Mathematical Thought from Ancient to Modern Times, Oxford University Press US, ISBN 978-0-19-506136-9, Algebra in the Eighteenth Century: The Theory of Equations

[28] Daniel Lazard, “Solving quintics in radicals”, in Olav Arnfinn Laudal, Ragni Piene, The Legacy of Niels Henrik Abel, pp. 207–225, Berlin, 2004. ISBN 3-540-43826-2

[29] Zucker, I. J., “The cubic equation — a new look at the irreducible case”, Mathematical Gazette 92, July 2008, 264–268.

[30] Shelbey, Samuel (1975), CRC Standard Mathematical Tables, CRC Press, ISBN 0-87819-622-6

[31] These are Formulas (80) and (83) of Weisstein, Eric W. 'Cubic Formula'. From MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/CubicFormula.html, rewritten for having a coherent notation.

[32] Holmes, G. C., “The use of hyperbolic cosines in solving cubic polynomials”, Mathematical Gazette 86. November 2002, 473–477.

[33] Abramowitz, Milton; Stegun, Irene A., eds. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover (1965), chap. 22 p. 773

[34] Nickalls, R. W. D. (November 1993), “A new approach to solving the cubic: Cardan’s solution revealed” (PDF), The Mathematical Gazette 77 (480): 354–359, doi:10.2307/3619777, ISSN 0025-5572, JSTOR 3619777 See esp. Fig. 2.

[35] Henriquez, Garcia (June–July 1935), “The graphical interpretation of the complex roots of cubic equations”, American Mathematical Monthly 42 (6): 383–384, doi:10.2307/2301359

[36] Barr, C. F. (1918), American Mathematical Monthly 25: 268, doi:10.2307/2972885

[37] Barr, C. F. (1917), Annals of Mathematics 19: 157

[38] Whitworth, William Allen. Trilinear Coordinates and Other Methods of Modern Analytical Geometry of Two Dimensions, Forgotten Books, 2012 (orig. Deighton, Bell, and Co., 1866). http://www.forgottenbooks.com/search?q=Trilinear+coordinates&t=books

15.8 References

• Anglin, W. S.; Lambek, Joachim (1995), “Mathematics in the Renaissance”, The Heritage of Thales, Springer, pp. 125–131, ISBN 978-0-387-94544-6 Ch. 24.

• Dence, T. (November 1997), “Cubics, chaos and Newton’s method”, Mathematical Gazette (Mathematical Association) 81: 403–408, doi:10.2307/3619617, ISSN 0025-5572

• Dunnett, R. (November 1994), “Newton–Raphson and the cubic”, Mathematical Gazette (Mathematical Association) 78: 347–348, doi:10.2307/3620218, ISSN 0025-5572

• Guilbeau, Lucye (1930), “The History of the Solution of the Cubic Equation”, Mathematics News Letter 5 (4): 8–12, doi:10.2307/3027812, JSTOR 3027812

• Jacobson, Nathan (2009), Basic algebra 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1

• Mitchell, D. W. (November 2007), “Solving cubics by solving triangles”, Mathematical Gazette (Mathematical Association) 91: 514–516, ISSN 0025-5572

• Mitchell, D. W. (November 2009), “Powers of φ as roots of cubics”, Mathematical Gazette (Mathematical Association) 93: ???, ISSN 0025-5572

• Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), “Section 5.6 Quadratic and Cubic Equations”, Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8

• Rechtschaffen, Edgar (July 2008), “Real roots of cubics: Explicit formula for quasi-solutions”, Mathematical Gazette (Mathematical Association) 92: 268–276, ISSN 0025-5572

• Zucker, I. J. (July 2008), “The cubic equation – a new look at the irreducible case”, Mathematical Gazette (Mathematical Association) 92: 264–268, ISSN 0025-5572

15.9 External links

• sums and products of roots

• Hazewinkel, Michiel, ed. (2001), “Cardano formula”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Solving a Cubic by means of Moebius transforms

• Interesting derivation of trigonometric cubic solution with 3 real roots

• Calculator for solving Cubics (also solves Quartics and Quadratics)

• Tartaglia’s work (and poetry) on the solution of the Cubic Equation at Convergence

• Cubic Equation Solver.

• Quadratic, cubic and quartic equations on MacTutor archive.

• Cubic Formula at PlanetMath.org.

• Cardano solution calculator as java applet at some local site. Only takes natural coefficients.

• Graphic explorer for cubic functions With interactive animation, slider controls for coefficients

• On Solution of Cubic Equations at Holistic Numerical Methods Institute

• Dave Auckly, Solving the quartic with a pencil American Math Monthly 114:1 (2007) 29—39

• “Cubic Equation” by Eric W. Weisstein, The Wolfram Demonstrations Project, 2007.

• The Cubic Tutorials by John H. Mathews

For the cubic x³ + bx² + cx + d = 0 with three real roots, the roots are the projection on the x-axis of the vertices A, B, and C of an equilateral triangle. The center of the triangle has the same abscissa as the inflection point.

The slope of line RA is twice that of RH. Denoting the complex roots of the cubic as g ± hi, g = OM (negative here) and h = √(tan ORH) = √(slope of line RA) = BE = DA.

Omar Khayyám’s geometric solution of a cubic equation, for the case a = 2, b = 16, giving the root 2. The fact that the vertical line intersects the x-axis at the center of the circle is specific to this particular example.

Chapter 16

Difference of two squares

In mathematics, the difference of two squares is a squared (multiplied by itself) number subtracted from another squared number. Every difference of squares may be factored according to the identity a² − b² = (a + b)(a − b) in elementary algebra.

16.1 Proof

The proof of the factorization identity is straightforward. Starting from the left-hand side, apply the distributive law to get

(a + b)(a − b) = a² + ba − ab − b²

and set ba − ab = 0 as an application of the commutative law. The resulting identity is one of the most commonly used in mathematics. Among many uses, it gives a simple proof of the AM–GM inequality in two variables. The proof just given indicates the scope of the identity in abstract algebra: it will hold in any commutative ring R. Conversely, if this identity holds in a ring R for all pairs of elements a and b of the ring, then R is commutative. To see this, apply the distributive law to the right-hand side of the original equation and get a² + ba − ab − b², and for this to be equal to a² − b², we must have ba − ab = 0 for all pairs a, b of elements of R, so the ring R is commutative.

16.2 Geometrical demonstrations

The difference of two squares can also be illustrated geometrically as the difference of two square areas in a plane. In the diagram, the shaded part represents the difference between the areas of the two squares, i.e. a² − b². The area of the shaded part can be found by adding the areas of the two rectangles: a(a − b) + b(a − b), which can be factorized to (a + b)(a − b). Therefore a² − b² = (a + b)(a − b).

Another geometric proof proceeds as follows: We start with the figure shown in the first diagram below, a large square with a smaller square removed from it. The side of the entire square is a, and the side of the small removed square is b. The area of the shaded region is a² − b². A cut is made, splitting the region into two rectangular pieces, as shown in the second diagram. The larger piece, at the top, has width a and height a − b. The smaller piece, at the bottom, has width a − b and height b. Now the smaller piece can be detached, rotated, and placed to the right of the larger piece. In this new arrangement, shown in the last diagram below, the two pieces together form a rectangle, whose width is a + b and whose height is a − b. This rectangle’s area is (a + b)(a − b). Since this rectangle came from rearranging the original figure, it must have the same area as the original figure. Therefore, a² − b² = (a + b)(a − b).

16.3 Uses

16.3.1 Factorisation of polynomials

The formula for the difference of two squares can be used for factoring polynomials that contain the square of a first quantity minus the square of a second quantity. For example, the polynomial x⁴ − 1 can be factored as follows:

x⁴ − 1 = (x² + 1)(x² − 1) = (x² + 1)(x + 1)(x − 1)

As a second example, the first two terms of x² − y² + x − y can be factored as (x + y)(x − y), so we have:

x² − y² + x − y = (x + y)(x − y) + x − y = (x − y)(x + y + 1)

16.3.2 Complex number case: sum of two squares

The difference of two squares is used to find the linear factors of the sum of two squares, using complex number coefficients. For example, the roots of z² + 5 can be found using difference of two squares:

z² + 5
= z² − i²·5
= z² − (i√5)²
= (z + i√5)(z − i√5)

Therefore the linear factors are (z + i√5) and (z − i√5). Since the two factors found by this method are complex conjugates, we can use this in reverse as a method of multiplying a complex number to get a real number. This is used to get real denominators in complex fractions.[1]

16.3.3 Rationalising denominators

The difference of two squares can also be used in the rationalising of irrational denominators.[2] This is a method for removing surds from expressions (or at least moving them), applying to division by some combinations involving square roots.

For example: the denominator of 5/(√3 + 4) can be rationalised as follows:

5/(√3 + 4)
= 5/(√3 + 4) × (√3 − 4)/(√3 − 4)
= 5(√3 − 4)/((√3 + 4)(√3 − 4))
= 5(√3 − 4)/((√3)² − 4²)
= 5(√3 − 4)/(3 − 16)
= −5(√3 − 4)/13.

Here, the irrational denominator √3 + 4 has been rationalised to 13.

16.3.4 Mental arithmetic

Main article: Multiplication algorithm § Quarter square multiplication

The difference of two squares can also be used as an arithmetical shortcut. If you are multiplying two numbers whose average is a number which is easily squared, the difference of two squares can be used to give the product of the original two numbers. For example:

27 × 33 = (30 − 3)(30 + 3)

This means that, using the difference of two squares, 27 × 33 can be restated as

a² − b², which is 30² − 3² = 891.
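The shortcut is easy to state as a tiny routine (ours): write the two factors as (m − k)(m + k) around their average m and return m² − k².

def product_via_squares(x, y):
    """Compute x*y as m^2 - k^2, where m is the average of x and y and k half their difference."""
    m = (x + y) / 2
    k = (y - x) / 2
    return m*m - k*k

print(product_via_squares(27, 33))   # 30^2 - 3^2 = 900 - 9 = 891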

16.3.5 Difference of two perfect squares

The difference of two consecutive perfect squares is the sum of the two bases n and n+1. This can be seen as follows:

(n + 1)² − n² = ((n + 1) + n)((n + 1) − n) = 2n + 1

Therefore the difference of two consecutive perfect squares is an odd number. Similarly, the difference of two arbitrary perfect squares is calculated as follows:

(n + k)² − n² = ((n + k) + n)((n + k) − n) = k(2n + k)

Therefore the difference of two even perfect squares is a multiple of 4 and the difference of two odd perfect squares is a multiple of 8.

Vectors a (purple), b (cyan) and a + b (blue) are shown with arrows

16.4 Generalizations

The identity also holds in inner product spaces over the field of real numbers, such as for dot product of Euclidean vectors:

a · a − b · b = (a + b) · (a − b)

The proof is identical. By the way, assuming that a and b have equal norms (which means that their dot squares are equal), it demonstrates analytically the fact that two diagonals of a rhombus are perpendicular.

16.4.1 Difference of two nth powers

If a and b are two elements of a commutative ring R, then aⁿ − bⁿ = (a − b)(aⁿ⁻¹ + aⁿ⁻²b + ⋯ + abⁿ⁻² + bⁿ⁻¹), i.e. aⁿ − bⁿ is (a − b) times the sum of the terms aⁿ⁻¹⁻ᵏbᵏ for k = 0, …, n − 1. Note that binomial coefficients do not appear in the second factor, and the summation stops at n − 1, not n.
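A quick numerical check (ours) of this factorization over a small range of integers:

def factorization_holds(a, b, n):
    """Check a^n - b^n == (a - b) * sum of a^(n-1-k) * b^k for k = 0, ..., n-1."""
    return a**n - b**n == (a - b) * sum(a**(n - 1 - k) * b**k for k in range(n))

print(all(factorization_holds(a, b, n)
          for a in range(-5, 6) for b in range(-5, 6) for n in range(1, 7)))   # True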

16.5 See also

• Congruum, the shared difference of three squares in arithmetic progression

• Conjugate (algebra)

• Factorization

16.6 Notes

[1] Complex or imaginary numbers TheMathPage.com, retrieved 22 December 2011

[2] Multiplying Radicals TheMathPage.com, retrieved 22 December 2011

16.7 References

• James Stuart Stanton: Encyclopedia of Mathematics. Infobase Publishing, 2005, ISBN 9780816051243, p. 131 (online copy)

• Alan S. Tussy, Roy David Gustafson: Elementary Algebra, 5th ed. Cengage Learning, 2011, ISBN 9781111567668, pp. 467–469 (online copy)

16.8 External links

• difference of two squares at mathpages.com

Chapter 17

Distributive property

“Distributivity” redirects here. It is not to be confused with Distributivism.

In abstract algebra and formal logic, the distributive property of binary operations generalizes the distributive law from elementary algebra. In propositional logic, distribution refers to two valid rules of replacement. The rules allow one to reformulate conjunctions and disjunctions within logical proofs. For example, in arithmetic:

2 ⋅ (1 + 3) = (2 ⋅ 1) + (2 ⋅ 3), but 2 / (1 + 3) ≠ (2 / 1) + (2 / 3).

In the left-hand side of the first equation, the 2 multiplies the sum of 1 and 3; on the right-hand side, it multiplies the 1 and the 3 individually, with the products added afterwards. Because these give the same final answer (8), it is said that multiplication by 2 distributes over addition of 1 and 3. Since one could have put any real numbers in place of 2, 1, and 3 above, and still have obtained a true equation, we say that multiplication of real numbers distributes over addition of real numbers.

17.1 Definition

Given a set S and two binary operators ∗ and + on S, we say that the operation ∗

• is left-distributive over + if, given any elements x, y, and z of S,

x ∗ (y + z) = (x ∗ y) + (x ∗ z)

• is right-distributive over + if, given any elements x, y, and z of S:

(y + z) ∗ x = (y ∗ x) + (z ∗ x)

• is distributive over + if it is left- and right-distributive.[1]

Notice that when ∗ is commutative, the three conditions above are logically equivalent.

17.2 Meaning

The operators used for examples in this section are the binary operations of addition ( + ) and multiplication ( · ) of numbers. There is a distinction between left-distributivity and right-distributivity:


a · (b ± c) = a · b ± a · c   (left-distributive)
(a ± b) · c = a · c ± b · c   (right-distributive)

In either case, the distributive property can be described in words as: To multiply a sum (or difference) by a factor, each summand (or minuend and subtrahend) is multiplied by this factor and the resulting products are added (or subtracted). If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa. One example of an operation that is “only” right-distributive is division, which is not commutative:

(a ± b) ÷ c = a ÷ c ± b ÷ c

In this case, left-distributivity does not apply:

a ÷ (b ± c) ≠ a ÷ b ± a ÷ c

The distributive laws are among the axioms for rings and fields. Examples of structures in which two operations are mutually related to each other by the distributive law are Boolean algebras such as the algebra of sets or the switching algebra. There are also combinations of operations that are not mutually distributive over each other; for example, addition is not distributive over multiplication. Multiplying sums can be put into words as follows: when a sum is multiplied by a sum, multiply each summand of one sum with each summand of the other sum (keeping track of signs), and then add up all of the resulting products.

17.3 Examples

17.3.1 Real numbers

In the following examples, the use of the distributive law on the set of real numbers R is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law.

First example (mental and written multiplication)

During mental arithmetic, distributivity is often used unconsciously:

6 · 16 = 6 · (10 + 6) = 6 · 10 + 6 · 6 = 60 + 36 = 96

Thus, to calculate 6 ⋅ 16 in your head, you first multiply 6 ⋅ 10 and 6 ⋅ 6 and add the intermediate results. Written multiplication is also based on the distributive law.

Second example (with variables)

3a²b · (4a − 5b) = 3a²b · 4a − 3a²b · 5b = 12a³b − 15a²b²

Third example (with two sums)

(a + b) · (a − b) = a · (a − b) + b · (a − b) = a² − ab + ba − b² = a² − b²
= (a + b) · a − (a + b) · b = a² + ba − ab − b² = a² − b²

Here the distributive law was applied twice, and it does not matter which bracket is first multiplied out.

Fourth example Here the distributive law is applied the other way around compared to the previous examples. Consider

12a³b² − 30a⁴bc + 18a²b³c².

Since the factor 6a²b occurs in every summand, it can be factored out. That is, due to the distributive law one obtains

12a³b² − 30a⁴bc + 18a²b³c² = 6a²b(2ab − 5a²c + 3b²c²).

17.3.2 Matrices

The distributive law is valid for matrix multiplication. More precisely,

(A + B) · C = A · C + B · C for all l × m -matrices A, B and m × n -matrices C , as well as

A · (B + C) = A · B + A · C for all l × m -matrices A and m × n -matrices B,C . Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws.

17.3.3 Other examples

1. Multiplication of ordinal numbers, in contrast, is only left-distributive, not right-distributive.

2. The cross product is left- and right-distributive over vector addition, though not commutative.

3. The union of sets is distributive over intersection, and intersection is distributive over union.

4. Logical disjunction (“or”) is distributive over logical conjunction (“and”), and conjunction is distributive over disjunction.

5. For real numbers (and for any totally ordered set), the maximum operation is distributive over the minimum operation, and vice versa: max(a, min(b, c)) = min(max(a, b), max(a, c)) and min(a, max(b, c)) = max(min(a, b), min(a, c)).

6. For integers, the greatest common divisor is distributive over the least common multiple, and vice versa: gcd(a, lcm(b, c)) = lcm(gcd(a, b), gcd(a, c)) and lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c)).

7. For real numbers, addition distributes over the maximum operation, and also over the minimum operation: a + max(b, c) = max(a + b, a + c) and a + min(b, c) = min(a + b, a + c).
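Examples 5 and 6 above can be verified numerically; a small sketch (ours) using math.gcd together with a little lcm helper:

import math
from itertools import product

def lcm(x, y):
    return x * y // math.gcd(x, y)

ok = True
for a, b, c in product(range(1, 25), repeat=3):
    # gcd distributes over lcm, and lcm over gcd:
    ok &= math.gcd(a, lcm(b, c)) == lcm(math.gcd(a, b), math.gcd(a, c))
    ok &= lcm(a, math.gcd(b, c)) == math.gcd(lcm(a, b), lcm(a, c))
    # max distributes over min, and min over max:
    ok &= max(a, min(b, c)) == min(max(a, b), max(a, c))
    ok &= min(a, max(b, c)) == max(min(a, b), min(a, c))
print(ok)   # True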

17.4 Propositional logic

17.4.1 Rule of replacement

In standard truth-functional propositional logic, distribution[2][3][4] in logical proofs uses two valid rules of replacement to expand individual occurrences of certain logical connectives, within some formula, into separate applications of those connectives across subformulas of the given formula. The rules are:

(P ∧ (Q ∨ R)) ⇔ ((P ∧ Q) ∨ (P ∧ R)) and

(P ∨ (Q ∧ R)) ⇔ ((P ∨ Q) ∧ (P ∨ R)) where " ⇔ ", also written ≡, is a metalogical symbol representing “can be replaced in a proof with” or “is logically equivalent to”. 94 CHAPTER 17. DISTRIBUTIVE PROPERTY

17.4.2 Truth functional connectives

Distributivity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functional tautologies.

Distribution of conjunction over conjunction (P ∧ (Q ∧ R)) ↔ ((P ∧ Q) ∧ (P ∧ R))

Distribution of conjunction over disjunction [5] (P ∧ (Q ∨ R)) ↔ ((P ∧ Q) ∨ (P ∧ R))

Distribution of disjunction over conjunction [6] (P ∨ (Q ∧ R)) ↔ ((P ∨ Q) ∧ (P ∨ R))

Distribution of disjunction over disjunction (P ∨ (Q ∨ R)) ↔ ((P ∨ Q) ∨ (P ∨ R))

Distribution of implication (P → (Q → R)) ↔ ((P → Q) → (P → R))

Distribution of implication over equivalence (P → (Q ↔ R)) ↔ ((P → Q) ↔ (P → R))

Distribution of disjunction over equivalence (P ∨ (Q ↔ R)) ↔ ((P ∨ Q) ↔ (P ∨ R))

Double distribution
((P ∧ Q) ∨ (R ∧ S)) ↔ (((P ∨ R) ∧ (P ∨ S)) ∧ ((Q ∨ R) ∧ (Q ∨ S)))
((P ∨ Q) ∧ (R ∨ S)) ↔ (((P ∧ R) ∨ (P ∧ S)) ∨ ((Q ∧ R) ∨ (Q ∧ S)))

17.5 Distributivity and rounding

In practice, the distributive property of multiplication (and division) over addition may appear to be compromised or lost because of the limitations of arithmetic precision. For example, the identity ⅓ + ⅓ + ⅓ = (1 + 1 + 1) / 3 appears to fail if the addition is conducted in decimal arithmetic; however, if many significant digits are used, the calculation will result in a closer approximation to the correct results. For example, if the arithmetical calculation takes the form: 0.33333 + 0.33333 + 0.33333 = 0.99999 ≠ 1, this result is a closer approximation than if fewer significant digits had been used. Even when fractional numbers can be represented exactly in arithmetical form, errors will be introduced if those arithmetical values are rounded or truncated. For example, buying two books, each priced at £14.99 before a tax of 17.5%, in two separate transactions will actually save £0.01, over buying them together: £14.99 × 1.175 = £17.61 to the nearest £0.01, giving a total expenditure of £35.22, but £29.98 × 1.175 = £35.23. Methods such as banker’s rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable.
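Both effects are easy to reproduce. The sketch below (ours) first shows that exactness can be lost in binary floating point, then re-creates the £14.99 example by rounding to the nearest penny with the decimal module:

from decimal import Decimal, ROUND_HALF_UP

print(1/3 + 1/3 + 1/3 == 1.0)    # True here: the rounding errors happen to cancel
print(0.1 + 0.2 == 0.3)          # False: such identities can fail under finite precision

def to_pence(x):
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

price, rate = Decimal("14.99"), Decimal("1.175")
separately = 2 * to_pence(price * rate)      # two transactions: 2 x 17.61 = 35.22
together = to_pence(2 * price * rate)        # one transaction: 29.98 x 1.175 -> 35.23
print(separately, together)                  # 35.22 35.23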

17.6 Distributivity in rings

Distributivity is most commonly found in rings and distributive lattices. A ring has two binary operations (commonly called "+" and "∗"), and one of the requirements of a ring is that ∗ must distribute over +. Most kinds of numbers (example 1) and matrices (example 4) form rings. A lattice is another kind of algebraic structure with two binary operations, ∧ and ∨. If either of these operations (say ∧) distributes over the other (∨), then ∨ must also distribute over ∧, and the lattice is called distributive. See also the article on distributivity (order theory). Examples 4 and 5 are Boolean algebras, which can be interpreted either as a special kind of ring (a Boolean ring) or a special kind of distributive lattice (a Boolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra. Examples 6 and 7 are distributive lattices which are not Boolean algebras. Failure of one of the two distributive laws brings about near-rings and near-fields instead of rings and division rings respectively. The operations are usually configured to have the near-ring or near-field distributive on the right but not on the left. Rings and distributive lattices are both special kinds of rigs, certain generalizations of rings. Those numbers in example 1 that don't form rings at least form rigs. Near-rigs are a further generalization of rigs that are left-distributive but not right-distributive; example 2 is a near-rig. 17.7. GENERALIZATIONS OF DISTRIBUTIVITY 95

17.7 Generalizations of distributivity

In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially in order theory one finds numerous important variants of distributivity, some of which include infinitary operations, such as the infinite distributive law; others are defined in the presence of only one binary operation; the corresponding definitions and their relations are given in the article distributivity (order theory). This also includes the notion of a completely distributive lattice. In the presence of an ordering relation, one can also weaken the above equalities by replacing = by either ≤ or ≥. Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion of sub-distributivity as explained in the article on interval arithmetic. In category theory, if (S, μ, η) and (S′, μ′, η′) are monads on a category C, a distributive law S.S′ → S′.S is a natural transformation λ : S.S′ → S′.S such that (S′, λ) is a lax map of monads S → S and (S, λ) is a colax map of monads S′ → S′. This is exactly the data needed to define a monad structure on S′.S: the multiplication map is S′μ.μ′S².S′λS and the unit map is η′S.η. See: distributive law between monads. A generalized distributive law has also been proposed in the area of information theory.

17.7.1 Notions of antidistributivity

The ubiquitous identity that relates inverses to the binary operation in any group, namely (xy)⁻¹ = y⁻¹x⁻¹, which is taken as an axiom in the more general context of a semigroup with involution, has sometimes been called an antidistributive property (of inversion as a unary operation).[7] In the context of a near-ring, which removes the commutativity of the additively written group and assumes only one-sided distributivity, one can speak of (two-sided) distributive elements but also of antidistributive elements. The latter reverse the order of (the non-commutative) addition; assuming a left near-ring (i.e. one in which all elements distribute when multiplied on the left), an antidistributive element a reverses the order of addition when multiplied to the right: (x + y)a = ya + xa.[8] In the study of propositional logic and Boolean algebra, the term antidistributive law is sometimes used to denote the interchange between conjunction and disjunction when implication factors over them:[9]

• (a ∨ b) ⇒ c ≡ (a ⇒ c) ∧ (b ⇒ c)

• (a ∧ b) ⇒ c ≡ (a ⇒ c) ∨ (b ⇒ c)

These two tautologies are a direct consequence of the duality in De Morgan’s laws.

17.8 Notes

[1] Ayres, p. 20

[2] Moore and Parker

[3] Copi and Cohen

[4] Hurley

[5] Russell and Whitehead, Principia Mathematica

[6] Russell and Whitehead, Principia Mathematica

[7] Chris Brink; Wolfram Kahl; Gunther Schmidt (1997). Relational Methods in Computer Science. Springer. p. 4. ISBN 978-3-211-82971-4.

[8] Celestina Cotti Ferrero; Giovanni Ferrero (2002). Nearrings: Some Developments Linked to Semigroups and Groups. Kluwer Academic Publishers. pp. 62 and 67. ISBN 978-1-4613-0267-4.

[9] Eric C.R. Hehner (1993). A Practical Theory of Programming. Springer Science & Business Media. p. 230. ISBN 978-1-4419-8596-5.

17.9 References

• Ayres, Frank, Schaum’s Outline of Modern Abstract Algebra, McGraw-Hill; 1st edition (June 1, 1965). ISBN 0-07-002655-6.

17.10 External links

• A demonstration of the Distributive Law for integer arithmetic (from cut-the-knot)

Chapter 18

Elementary algebra

The quadratic formula, which is the solution to the quadratic equation ax² + bx + c = 0. Here the symbols a, b, c, x all are variables that represent numbers.

Elementary algebra encompasses some of the basic concepts of algebra, one of the main branches of mathematics. It is typically taught to secondary school students and builds on their understanding of arithmetic. Whereas arithmetic deals with specified numbers,[1] algebra introduces quantities without fixed values, known as variables.[2] This use of variables entails a use of algebraic notation and an understanding of the general rules of the operators introduced in arithmetic. Unlike abstract algebra, elementary algebra is not concerned with algebraic structures outside the realm of real and complex numbers. The use of variables to denote quantities allows general relationships between quantities to be formally and concisely expressed, and thus enables solving a broader scope of problems. Most quantitative results in science and mathematics are expressed as algebraic equations.

18.1 Algebraic notation

Main article: Mathematical notation

Algebraic notation describes how algebra is written. It follows certain rules and conventions, and has its own terminology. For example, the expression 3x² − 2xy + c has the following components:

Two-dimensional plot (magenta curve) of the algebraic equation y = x² − x − 2

1 : Exponent (power), 2 : Coefficient, 3 : term, 4 : operator, 5 : constant, x, y : variables

A coefficient is a numerical value which multiplies a variable (the operator is omitted). A term is an addend or a summand, a group of coefficients, variables, constants and exponents that may be separated from the other terms by the plus and minus operators.[3] Letters represent variables and constants. By convention, letters at the beginning of the alphabet (e.g. a, b, c) are typically used to represent constants, and those toward the end of the alphabet (e.g. x, y and z) are used to represent variables.[4] They are usually written in italics.[5] Algebraic operations work in the same way as arithmetic operations,[6] such as addition, subtraction, multiplication, division and exponentiation,[7] and are applied to algebraic variables and terms. Multiplication symbols are usually omitted, and implied when there is no space between two variables or terms, or when a coefficient is used. For example, 3 × x² is written as 3x², and 2 × x × y may be written 2xy.[8] Usually terms with the highest power (exponent) are written on the left, for example, x² is written to the left of x. When a coefficient is one, it is usually omitted (e.g. 1x² is written x²).[9] Likewise when the exponent (power) is one (e.g. 3x¹ is written 3x).[10] When the exponent is zero, the result is always 1 (e.g. x⁰ is always rewritten to 1).[11] However 0⁰, being undefined, should not appear in an expression, and care should be taken in simplifying expressions in which variables may appear in exponents.

18.1.1 Alternative notation

Other types of notation are used in algebraic expressions when the required formatting is not available, or can not be implied, such as where only letters and symbols are available. For example, exponents are usually formatted using superscripts, e.g. x². In plain text, and in the TeX mark-up language, the caret symbol "^" represents exponents, so x² is written as “x^2”.[12][13] In programming languages such as Ada,[14] Fortran,[15] Perl,[16] Python[17] and Ruby,[18] a double asterisk is used, so x² is written as “x**2”. Many programming languages and calculators use a single asterisk to represent the multiplication symbol,[19] and it must be explicitly used, for example, 3x is written “3*x”.

18.2 Concepts

18.2.1 Variables

Main article: Variable (mathematics)

Elementary algebra builds on and extends arithmetic[20] by introducing letters called variables to represent general (non-specified) numbers. This is useful for several reasons.

1. Variables may represent numbers whose values are not yet known. For example, if the temperature today, T, is 20 degrees higher than the temperature yesterday, Y, then the problem can be described algebraically as T = Y + 20 .[21]

2. Variables allow one to describe general problems,[22] without specifying the values of the quantities that are involved. For example, it can be stated specifically that 5 minutes is equivalent to 60 × 5 = 300 seconds. A more general (algebraic) description may state that the number of seconds, s = 60 × m , where m is the number of minutes.

3. Variables allow one to describe mathematical relationships between quantities that may vary.[23] For example, the relationship between the circumference, c, and diameter, d, of a circle is described by π = c/d .

4. Variables allow one to describe some mathematical properties. For example, a basic property of addition is commutativity which states that the order of numbers being added together does not matter. Commutativity is stated algebraically as (a + b) = (b + a) .[24]

18.2.2 Evaluating expressions

Main article: Expression (mathematics)

Algebraic expressions may be evaluated and simplified, based on the basic properties of arithmetic operations (addition, subtraction, multiplication, division and exponentiation). For example,

• Added terms are simplified using coefficients. For example, x + x + x can be simplified as 3x (where 3 is the coefficient).

• Multiplied terms are simplified using exponents. For example, x × x × x is represented as x³.

Example of variables showing the relationship between a circle’s diameter and its circumference. For any circle, its circumference c , divided by its diameter d , is equal to the constant pi, π (approximately 3.14).

• Like terms are added together,[25] for example, 2x² + 3ab − x² + ab is written as x² + 4ab, because the terms containing x² are added together, and the terms containing ab are added together.

• Brackets can be “multiplied out”, using distributivity. For example, x(2x + 3) can be written as (x × 2x) + (x × 3), which can be written as 2x² + 3x.

• Expressions can be factored. For example, 6x⁵ + 3x², by dividing both terms by 3x², can be written as 3x²(2x³ + 1).
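These simplifications can be reproduced with a computer algebra system; a sketch (ours) using SymPy, which the text does not prescribe:

from sympy import symbols, expand, factor, simplify

x, a, b = symbols('x a b')

print(simplify(x + x + x))                      # 3*x
print(simplify(2*x**2 + 3*a*b - x**2 + a*b))    # x**2 + 4*a*b (term order may differ)
print(expand(x*(2*x + 3)))                      # 2*x**2 + 3*x
print(factor(6*x**5 + 3*x**2))                  # 3*x**2*(2*x**3 + 1)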

18.2.3 Equations

Main article: Equation

An equation states that two expressions are equal using the symbol for equality, = (the equals sign).[26] One of the most well-known equations describes Pythagoras’ law relating the length of the sides of a right angle triangle:[27]

c² = a² + b²

Animation illustrating Pythagoras’ rule for a right-angle triangle, which shows the algebraic relationship between the triangle’s hypotenuse, and the other two sides.

This equation states that c², representing the square of the length of the side that is the hypotenuse (the side opposite the right angle), is equal to the sum (addition) of the squares of the other two sides whose lengths are represented by a and b. An equation is the claim that two expressions have the same value and are equal. Some equations are true for all values of the involved variables (such as a + b = b + a); such equations are called identities. Conditional equations are true for only some values of the involved variables, e.g. x² − 1 = 8 is true only for x = 3 and x = −3. The values of the variables which make the equation true are the solutions of the equation and can be found through equation solving. Another type of equation is an inequality. Inequalities are used to show that one side of the equation is greater, or less, than the other. The symbols used for this are: a > b where > represents 'greater than', and a < b where < represents 'less than'. Just like standard equality equations, numbers can be added, subtracted, multiplied or divided. The only exception is that when multiplying or dividing by a negative number, the inequality symbol must be flipped.

Properties of equality

By definition, equality is an equivalence relation, meaning it has the properties (a) reflexive (i.e. b = b), (b) symmetric (i.e. if a = b then b = a), and (c) transitive (i.e. if a = b and b = c then a = c).[28] It also satisfies the important property that if two symbols are used for equal things, then one symbol can be substituted for the other in any true statement about the first and the statement will remain true. This implies the following properties:

• if a = b and c = d then a + c = b + d and ac = bd;
• if a = b then a + c = b + c;
• more generally, for any function f, if a = b then f(a) = f(b).

Properties of inequality

The relations less than < and greater than > have the property of transitivity:[29]

• If a < b and b < c then a < c;
• If a < b and c < d then a + c < b + d;[30]
• If a < b and c > 0 then ac < bc;
• If a < b and c < 0 then bc < ac.

By reversing the inequation, < and > can be swapped,[31] for example:

• a < b is equivalent to b > a

18.2.4 Substitution

Main article: Substitution (algebra) See also: Substitution (logic)

Substitution is replacing the terms in an expression to create a new expression. Substituting 3 for a in the expression a*5 makes a new expression 3*5 with meaning 15. Substituting the terms of a statement makes a new statement. When the original statement is true independently of the values of the terms, the statement created by substitutions is also true. Hence definitions can be made in symbolic terms and interpreted through substitution: if a² := a ∗ a, where := means “is defined to equal”, substituting 3 for a informs the reader of this statement that 3² means 3*3 = 9.

Often it’s not known whether the statement is true independently of the values of the terms, and substitution allows one to derive restrictions on the possible values, or show what conditions the statement holds under. For example, taking the statement x + 1 = 0, if x is substituted with 1, this implies 1 + 1 = 2 = 0, which is false, which implies that if x + 1 = 0 then x can't be 1.

If x and y are integers, rationals, or real numbers, then xy = 0 implies x = 0 or y = 0. Suppose abc = 0. Then, substituting a for x and bc for y, we learn a = 0 or bc = 0. Then we can substitute again, letting x = b and y = c, to show that if bc = 0 then b = 0 or c = 0. Therefore, if abc = 0, then a = 0 or (b = 0 or c = 0), so abc = 0 implies a = 0 or b = 0 or c = 0.

Consider if the original fact were stated as "ab = 0 implies a = 0 or b = 0.” Then when we say “suppose abc = 0,” we have a conflict of terms when we substitute. Yet the above logic is still valid to show that if abc = 0 then a = 0 or b = 0 or c = 0 if, instead of letting a = a and b = bc, we substitute a for a and b for bc (and, with bc = 0, substitute b for a and c for b). This shows that substituting for the terms in a statement isn't always the same as letting the terms from the statement equal the substituted terms. In this situation it’s clear that if we substitute an expression a into the a term of the original equation, the a substituted does not refer to the a in the statement "ab = 0 implies a = 0 or b = 0.”
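As a small illustration of substitution in practice, the following Python sketch (using the third-party SymPy library, an assumption on my part; the article does not mention any software) substitutes values into the expressions discussed above:

```python
# Minimal sketch of substitution with SymPy (an assumed dependency).
from sympy import symbols

a, x = symbols('a x')

expr = a * 5
print(expr.subs(a, 3))     # 15: substituting 3 for a in a*5

# The statement x + 1 = 0 cannot hold when x = 1, because the left-hand
# side then evaluates to 2 rather than 0.
print((x + 1).subs(x, 1))  # 2, so x = 1 is not a solution of x + 1 = 0
```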

18.3 Solving algebraic equations

See also: Equation solving

The following sections lay out examples of some of the types of algebraic equations that may be encountered.

A typical algebra problem.

18.3.1 Linear equations with one variable

Main article: Linear equation

Linear equations are so-called, because when they are plotted, they describe a straight line. The simplest equations to solve are linear equations that have only one variable. They contain only constant numbers and a single variable without an exponent. As an example, consider:

Problem in words: If you double my son’s age and add 4, the resulting answer is 12. How old is my son?

Equivalent equation: 2x + 4 = 12, where x represents my son’s age

To solve this kind of equation, the technique is to add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation is the value of the variable.[32] This problem and its solution are as follows: starting from 2x + 4 = 12, subtract 4 from both sides to obtain 2x = 8, then divide both sides by 2 to obtain x = 4. In words: my son’s age is 4.

The general form of a linear equation with one variable can be written as ax + b = c. Following the same procedure (i.e. subtract b from both sides, and then divide by a), the general solution is given by x = (c − b)/a.
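A minimal sketch of this procedure in plain Python (the function name solve_linear is my own, purely for illustration):

```python
# Solve a*x + b = c by undoing each operation, assuming a != 0.
def solve_linear(a, b, c):
    return (c - b) / a   # subtract b from both sides, then divide by a

print(solve_linear(2, 4, 12))   # 4.0 -- the son's age in the word problem
print(solve_linear(3, -5, 7))   # 4.0 -- another example: 3x - 5 = 7
```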

18.3.2 Linear equations with two variables

A linear equation with two variables has many (i.e. an infinite number of) solutions.[33] For example:

Problem in words: I am 22 years older than my son. How old are we?
Equivalent equation: y = x + 22 where y is my age, x is my son’s age.

This cannot be worked out by itself. If I told you my son’s age, then there would no longer be two unknowns (variables), and the problem becomes a linear equation with just one variable, that can be solved as described above.

Solving two linear equations with a unique solution at the point that they intersect.

To solve a linear equation with two variables (unknowns) requires two related equations. For example, if I also revealed that in 10 years’ time I will be twice my son’s age, this gives a second equation: y + 10 = 2(x + 10).

Now there are two related linear equations, each with two unknowns, which lets us produce a linear equation with just one variable, by subtracting one from the other (called the elimination method):[34] the second equation simplifies to y = 2x + 10, and subtracting the first equation (y = x + 22) from it gives 0 = x − 12, so x = 12. In other words, my son is aged 12, and as I am 22 years older, I must be 34. In 10 years’ time, my son will be 22, and I will be twice his age, 44. This problem is illustrated on the associated plot of the equations. For other ways to solve this kind of equation, see below, System of linear equations.

18.3.3 Quadratic equations

Main article: Quadratic equation

Quadratic equation plot of y = x² + 3x − 10 showing its roots at x = −5 and x = 2, and that the quadratic can be rewritten as y = (x + 5)(x − 2).

A quadratic equation is one which includes a term with an exponent of 2, for example x²,[35] and no term with a higher exponent. The name derives from the Latin quadrus, meaning square.[36] In general, a quadratic equation can be expressed in the form ax² + bx + c = 0,[37] where a is not zero (if it were zero, then the equation would not be quadratic but linear). Because of this, a quadratic equation must contain the term ax², which is known as the quadratic term. Hence a ≠ 0, and so we may divide by a and rearrange the equation into the standard form

x² + px + q = 0

where p = b/a and q = c/a. Solving this, by a process known as completing the square, leads to the quadratic formula

x = (−b ± √(b² − 4ac)) / (2a), where the symbol “±” indicates that both

x = (−b + √(b² − 4ac)) / (2a) and x = (−b − √(b² − 4ac)) / (2a) are solutions of the quadratic equation. Quadratic equations can also be solved using factorization (the reverse process of which is expansion, but for two linear terms is sometimes denoted foiling). As an example of factoring:

x² + 3x − 10 = 0, which is the same thing as

(x + 5)(x − 2) = 0. It follows from the zero-product property that x = 2 and x = −5 are the solutions, since at least one of the factors must be equal to zero. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system. For example,

x² + 1 = 0 has no real number solution since no real number squared equals −1. Sometimes a quadratic equation has a root of multiplicity 2, such as:

(x + 1)² = 0. For this equation, −1 is a root of multiplicity 2. This means −1 appears two times, since the equation can be rewritten in factored form as

[x − (−1)][x − (−1)] = 0.
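The quadratic formula is easy to turn into a short program. A minimal Python sketch (the function name is my own choice; cmath.sqrt is used so that the complex solutions discussed below are also produced):

```python
import cmath

# Roots of a*x**2 + b*x + c = 0 via the quadratic formula, assuming a != 0.
def quadratic_roots(a, b, c):
    disc = cmath.sqrt(b**2 - 4*a*c)
    return (-b + disc) / (2*a), (-b - disc) / (2*a)

print(quadratic_roots(1, 3, -10))  # ((2+0j), (-5+0j)): roots of x**2 + 3x - 10
print(quadratic_roots(1, 0, 1))    # (1j, -1j): x**2 + 1 = 0 has no real roots
print(quadratic_roots(1, 2, 1))    # ((-1+0j), (-1+0j)): the double root of (x + 1)**2
```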

Complex numbers

All quadratic equations have two solutions in complex numbers, a category that includes real numbers, imaginary numbers, and sums of real and imaginary numbers. Complex numbers first arise in the teaching of quadratic equations and the quadratic formula. For example, the quadratic equation

x² + x + 1 = 0 has solutions

x = (−1 + √−3) / 2 and x = (−1 − √−3) / 2. Since √−3 is not any real number, both of these solutions for x are complex numbers.

18.3.4 Exponential and logarithmic equations

Main article: Logarithm

An exponential equation is one which has the form a^x = b for a > 0,[38] which has solution

The graph of the logarithm to base 2 crosses the x axis (horizontal axis) at 1 and passes through the points with coordinates (2, 1), (4, 2), and (8, 3). For example, log₂(8) = 3, because 2³ = 8. The graph gets arbitrarily close to the y axis, but does not meet or intersect it.

x = logₐ b = ln b / ln a when b > 0. Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if

3 · 2^(x−1) + 1 = 10 then, by subtracting 1 from both sides of the equation, and then dividing both sides by 3, we obtain

2^(x−1) = 3 whence

x − 1 = log₂ 3 or

x = log₂ 3 + 1.

A logarithmic equation is an equation of the form logₐ(x) = b for a > 0, which has solution

x = a^b.

For example, if

4 log₅(x − 3) − 2 = 6 then, by adding 2 to both sides of the equation, followed by dividing both sides by 4, we get

log₅(x − 3) = 2, whence x − 3 = 5² = 25, from which we obtain x = 28.
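Both worked examples can be checked numerically. A minimal Python sketch using only the standard math module (the printed values are approximate because of floating-point arithmetic):

```python
import math

# Exponential example: 3 * 2**(x - 1) + 1 = 10  =>  x = log2(3) + 1
x = math.log(3, 2) + 1
print(x, 3 * 2**(x - 1) + 1)          # about 2.585 and 10.0

# Logarithmic example: 4 * log5(x - 3) - 2 = 6  =>  x = 5**2 + 3 = 28
x = 5**2 + 3
print(x, 4 * math.log(x - 3, 5) - 2)  # 28 and about 6.0
```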

18.3.5 Radical equations

Radical equation showing two ways to represent the same expression. The triple bar means the equation is true for all values of x.

A radical equation is one that includes a radical sign, which includes square roots, √x, cube roots, ∛x, and nth roots, ⁿ√x. Recall that an nth root can be rewritten in exponential format, so that ⁿ√x is equivalent to x^(1/n). Combined with regular exponents (powers), √(x³) (the square root of x cubed) can be rewritten as x^(3/2).[39] So a common form of a radical equation is ⁿ√(x^m) = a (equivalent to x^(m/n) = a) where m and n are integers. It has real solution(s): for example, if:

(x + 5)^(2/3) = 4, then

x + 5 = ±(√4)³
x + 5 = ±8
x = −5 ± 8
x = 3, −13

18.3.6 System of linear equations

Main article: System of linear equations

There are different methods to solve a system of linear equations with two variables.

Elimination method


The solution set for the equations x − y = −1 and 3x + y = 9 is the single point (2, 3).

One way of solving a system of linear equations is the elimination method:

4x + 2y = 14
2x − y = 1.

Multiplying the terms in the second equation by 2:

4x + 2y = 14

4x − 2y = 2. Adding the two equations together gives:

8x = 16 which simplifies to x = 2.

Since x = 2 is known, it is then possible to deduce that y = 3 from either of the original two equations (by using 2 instead of x). The full solution to this problem is then

x = 2
y = 3.

Note that this is not the only way to solve this specific system; y could have been solved before x .
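The same system can also be handed to a numerical library. A minimal sketch using NumPy (a third-party library assumed to be available; the text itself works entirely by hand):

```python
import numpy as np

# Coefficient matrix and right-hand side of 4x + 2y = 14, 2x - y = 1.
A = np.array([[4.0, 2.0],
              [2.0, -1.0]])
b = np.array([14.0, 1.0])

print(np.linalg.solve(A, b))   # [2. 3.], i.e. x = 2 and y = 3
```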

Substitution method

Another way of solving the same system of linear equations is by substitution.

4x + 2y = 14
2x − y = 1.

An equivalent for y can be deduced by using one of the two equations. Using the second equation:

2x − y = 1

Subtracting 2x from each side of the equation:

2x − 2x − y = 1 − 2x
−y = 1 − 2x
and multiplying by −1:
y = 2x − 1.

Using this y value in the first equation in the original system:

4x + 2(2x − 1) = 14
4x + 4x − 2 = 14
8x − 2 = 14

Adding 2 on each side of the equation:

8x − 2 + 2 = 14 + 2
8x = 16

which simplifies to

x = 2

Using this value in one of the equations, the same solution as in the previous method is obtained.

x = 2
y = 3.

Note that this is not the only way to solve this specific system; in this case as well, y could have been solved before x .

18.3.7 Other types of systems of linear equations

Inconsistent systems

In the above example, a solution exists. However, there are also systems of equations which do not have any solution. Such a system is called inconsistent. An obvious example is

x + y = 1
0x + 0y = 2.

As 0≠2, the second equation in the system has no solution. Therefore, the system has no solution. However, not all inconsistent systems are recognized at first sight. As an example, let us consider the system

4x + 2y = 12
−2x − y = −4.

Multiplying both sides of the second equation by 2, and adding it to the first one, results in

0x + 0y = 4 ,

which clearly has no solution.
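A numerical solver reports the same failure. In the following NumPy sketch (an assumed third-party dependency), the coefficient matrix of the system is singular, so no unique solution exists:

```python
import numpy as np

# The system 4x + 2y = 12, -2x - y = -4 from the text.
A = np.array([[4.0, 2.0],
              [-2.0, -1.0]])
b = np.array([12.0, -4.0])

try:
    print(np.linalg.solve(A, b))
except np.linalg.LinAlgError as err:
    # The rows of A are proportional, so the matrix is singular and,
    # with this right-hand side, the system has no solution at all.
    print("singular system:", err)
```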

Undetermined systems

There are also systems which have infinitely many solutions, in contrast to a system with a unique solution (meaning a unique pair of values for x and y). For example:

The equations 3x + 2y = 6 and 3x + 2y = 12 describe parallel lines that cannot intersect, so the system is unsolvable.

4x + 2y = 12
−2x − y = −6

Isolating y in the second equation:

y = −2x + 6
And using this value in the first equation in the system:

4x + 2(−2x + 6) = 12
4x − 4x + 12 = 12
12 = 12

The equality is true, but it does not provide a value for x. Indeed, one can easily verify (by just filling in some values of x) that for any x there is a solution as long as y = −2x + 6. There is an infinite number of solutions for this system.

Plot of a quadratic equation (red) and a linear equation (blue) that do not intersect, and consequently for which there is no common solution.

Over- and underdetermined systems

Systems with more variables than the number of linear equations are called underdetermined. Such a system, if it has any solutions, does not have a unique one but rather an infinitude of them. An example of such a system is

x + 2y = 10
y − z = 2.

When trying to solve it, one is led to express some variables as functions of the others, but one cannot express all solutions numerically, because there are infinitely many of them (if there are any at all). A system with a greater number of equations than variables is called overdetermined. If an overdetermined system has any solutions, necessarily some equations are linear combinations of the others.

18.4 See also

• History of elementary algebra
• Binary operation
• Gaussian elimination
• Mathematics education
• Number line
• Polynomial

18.5 References

• Leonhard Euler, Elements of Algebra, 1770. English translation Tarquin Press, 2007, ISBN 978-1-899618-79-8, also online digitized editions[40] 2006,[41] 1822.
• Charles Smith, A Treatise on Algebra, in Cornell University Library Historical Math Monographs.
• Redden, John. Elementary Algebra. Flat World Knowledge, 2011.

[1] H.E. Slaught and N.J. Lennes, Elementary algebra, Publ. Allyn and Bacon, 1915, page 1 (republished by Forgotten Books)

[2] Lewis Hirsch, Arthur Goodman, Understanding Elementary Algebra With Geometry: A Course for College Students, Publisher: Cengage Learning, 2005, ISBN 0534999727, 9780534999728, 654 pages, page 2

[3] Richard N. Aufmann, Joanne Lockwood, Introductory Algebra: An Applied Approach, Publisher Cengage Learning, 2010, ISBN 1439046042, 9781439046043, page 78

[4] William L. Hosch (editor), The Britannica Guide to Algebra and Trigonometry, Britannica Educational Publishing, The Rosen Publishing Group, 2010, ISBN 1615302190, 9781615302192, page 71

[5] James E. Gentle, Numerical Linear Algebra for Applications in Statistics, Publisher: Springer, 1998, ISBN 0387985425, 9780387985428, 221 pages, [James E. Gentle page 183]

[6] Horatio Nelson Robinson, New elementary algebra: containing the rudiments of science for schools and academies, Ivison, Phinney, Blakeman, & Co., 1866, page 7

[7] Ron Larson, Robert Hostetler, Bruce H. Edwards, Algebra And Trigonometry: A Graphing Approach, Publisher: Cengage Learning, 2007, ISBN 061885195X, 9780618851959, 1114 pages, page 6

[8] Sin Kwai Meng, Chip Wai Lung, Ng Song Beng, “Algebraic notation”, in Mathematics Matters Secondary 1 Express Textbook, Publisher Panpac Education Pte Ltd, ISBN 9812738827, 9789812738820, page 68

[9] David Alan Herzog, Teach Yourself Visually Algebra, Publisher John Wiley & Sons, 2008, ISBN 0470185597, 9780470185599, 304 pages, page 72

[10] John C. Peterson, Technical Mathematics With Calculus, Publisher Cengage Learning, 2003, ISBN 0766861899, 9780766861893, 1613 pages, page 31

[11] Jerome E. Kaufmann, Karen L. Schwitters, Algebra for College Students, Publisher Cengage Learning, 2010, ISBN 0538733543, 9780538733540, 803 pages, page 222

[12] Ramesh Bangia, Dictionary of Information Technology, Publisher Laxmi Publications, Ltd., 2010, ISBN 9380298153, 9789380298153, page 212

[13] George Grätzer, First Steps in LaTeX, Publisher Springer, 1999, ISBN 0817641327, 9780817641320, page 17

[14] S. Tucker Taft, Robert A. Duff, Randall L. Brukardt, Erhard Ploedereder, Pascal Leroy, Ada 2005 Reference Manual, Volume 4348 of Lecture Notes in Computer Science, Publisher Springer, 2007, ISBN 3540693351, 9783540693352, page 13

[15] C. Xavier, Fortran 77 And Numerical Methods, Publisher New Age International, 1994, ISBN 812240670X, 9788122406702, page 20

[16] Randal Schwartz, Brian Foy, Tom Phoenix, Learning Perl, Publisher O'Reilly Media, Inc., 2011, ISBN 1449313140, 9781449313142, page 24

[17] Matthew A. Telles, Python Power!: The Comprehensive Guide, Publisher Course Technology PTR, 2008, ISBN 1598631586, 9781598631586, page 46

[18] Kevin C. Baird, Ruby by Example: Concepts and Code, Publisher No Starch Press, 2007, ISBN 1593271484, 9781593271480, page 72
[19] William P. Berlinghoff, Fernando Q. Gouvêa, Math through the Ages: A Gentle History for Teachers and Others, Publisher MAA, 2004, ISBN 0883857367, 9780883857366, page 75
[20] Thomas Sonnabend, Mathematics for Teachers: An Interactive Approach for Grades K-8, Publisher: Cengage Learning, 2009, ISBN 0495561665, 9780495561668, 759 pages, page xvii
[21] Lewis Hirsch, Arthur Goodman, Understanding Elementary Algebra With Geometry: A Course for College Students, Publisher: Cengage Learning, 2005, ISBN 0534999727, 9780534999728, 654 pages, page 48
[22] Lawrence S. Leff, College Algebra: Barron’s Ez-101 Study Keys, Publisher: Barron’s Educational Series, 2005, ISBN 0764129147, 9780764129148, 230 pages, page 2
[23] Ron Larson, Kimberly Nolting, Elementary Algebra, Publisher: Cengage Learning, 2009, ISBN 0547102275, 9780547102276, 622 pages, page 210
[24] Charles P. McKeague, Elementary Algebra, Publisher: Cengage Learning, 2011, ISBN 0840064217, 9780840064219, 571 pages, page 49
[25] Andrew Marx, Shortcut Algebra I: A Quick and Easy Way to Increase Your Algebra I Knowledge and Test Scores, Publisher Kaplan Publishing, 2007, ISBN 1419552880, 9781419552885, 288 pages, page 51
[26] Mark Clark, Cynthia Anfinson, Beginning Algebra: Connecting Concepts Through Applications, Publisher Cengage Learning, 2011, ISBN 0534419380, 9780534419387, 793 pages, page 134
[27] Alan S. Tussy, R. David Gustafson, Elementary and Intermediate Algebra, Publisher Cengage Learning, 2012, ISBN 1111567689, 9781111567682, 1163 pages, page 493
[28] Douglas Downing, Algebra the Easy Way, Publisher Barron’s Educational Series, 2003, ISBN 0764119729, 9780764119729, 392 pages, page 20
[29] Ron Larson, Robert Hostetler, Intermediate Algebra, Publisher Cengage Learning, 2008, ISBN 0618753524, 9780618753529, 857 pages, page 96
[30] http://math.stackexchange.com/a/1043755/19368
[31] Chris Carter, Physics: Facts and Practice for A Level, Publisher Oxford University Press, 2001, ISBN 019914768X, 9780199147687, 144 pages, page 50
[32] Slavin, Steve (1989). All the Math You'll Ever Need. John Wiley & Sons. p. 72. ISBN 0-471-50636-2.
[33] Sinha, The Pearson Guide to Quantitative Aptitude for CAT 2/e, Publisher: Pearson Education India, 2010, ISBN 8131723666, 9788131723661, 599 pages, page 195
[34] Cynthia Y. Young, Precalculus, Publisher John Wiley & Sons, 2010, ISBN 0471756849, 9780471756842, 1175 pages, page 699
[35] Mary Jane Sterling, Algebra II For Dummies, Publisher: John Wiley & Sons, 2006, ISBN 0471775819, 9780471775812, 384 pages, page 37
[36] John T. Irwin, The Mystery to a Solution: Poe, Borges, and the Analytic Detective Story, Publisher JHU Press, 1996, ISBN 0801854660, 9780801854668, 512 pages, page 372
[37] Sharma/Khattar, The Pearson Guide To Objective Mathematics For Engineering Entrance Examinations, 3/E, Publisher Pearson Education India, 2010, ISBN 8131723631, 9788131723630, 1248 pages, page 621
[38] Aven Choo, LMAN OL Additional Maths Revision Guide 3, Publisher Pearson Education South Asia, 2007, ISBN 9810600011, 9789810600013, page 105
[39] John C. Peterson, Technical Mathematics With Calculus, Publisher Cengage Learning, 2003, ISBN 0766861899, 9780766861893, 1613 pages, page 525
[40] Euler’s Elements of Algebra
[41] Elements of algebra – Leonhard Euler, John Hewlett, Francis Horner, Jean Bernoulli, Joseph Louis Lagrange – Google Books

18.6 External links

Chapter 19

Equating coefficients

In mathematics, the method of equating the coefficients is a way of solving a functional equation of two expressions such as polynomials for a number of unknown parameters. It relies on the fact that two expressions are identical precisely when corresponding coefficients are equal for each different type of term. The method is used to bring formulas into a desired form.

19.1 Example in real fractions

Suppose we want to apply partial fraction decomposition to the expression:

1 / (x(x − 1)(x − 2)), that is, we want to bring it into the form:

A/x + B/(x − 1) + C/(x − 2), in which the unknown parameters are A, B and C. Multiplying these formulas by x(x − 1)(x − 2) turns both into polynomials, which we equate:

A(x − 1)(x − 2) + Bx(x − 2) + Cx(x − 1) = 1, or, after expansion and collecting terms with equal powers of x:

(A + B + C)x² − (3A + 2B + C)x + 2A = 1. At this point it is essential to realize that the polynomial 1 is in fact equal to the polynomial 0x² + 0x + 1, having zero coefficients for the positive powers of x. Equating the corresponding coefficients now results in this system of linear equations:

A + B + C = 0, 3A + 2B + C = 0, 2A = 1. Solving it results in:

A = 1/2, B = −1, C = 1/2.
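A minimal SymPy sketch (SymPy being an assumed third-party library) solves the same system of coefficient equations, and its apart() routine gives the decomposition directly as a cross-check:

```python
from sympy import symbols, Eq, solve, apart

A, B, C, x = symbols('A B C x')

# The linear system obtained by equating coefficients.
system = [Eq(A + B + C, 0), Eq(3*A + 2*B + C, 0), Eq(2*A, 1)]
print(solve(system, [A, B, C]))           # {A: 1/2, B: -1, C: 1/2}

# Cross-check with SymPy's own partial-fraction decomposition.
print(apart(1 / (x*(x - 1)*(x - 2)), x))  # e.g. 1/(2*(x - 2)) - 1/(x - 1) + 1/(2*x)
```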


19.2 Example in nested radicals

A similar problem, involving equating like terms rather than coefficients of like terms, arises if we wish to denest the nested radical √(a + b√c) to obtain an equivalent expression not involving a square root of an expression itself involving a square root. We can postulate the existence of rational parameters d, e such that

√(a + b√c) = √d + √e. Squaring both sides of this equation yields:

a + b√c = d + e + 2√(de).

To find d and e we equate the terms not involving square roots, so a = d + e, and equate the parts involving radicals, so b√c = 2√(de), which when squared implies b²c = 4de. This gives us two equations, one quadratic and one linear, in the desired parameters d and e, and these can be solved to obtain

e = (a + √(a² − b²c)) / 2, d = (a − √(a² − b²c)) / 2, which is a valid solution pair if and only if √(a² − b²c) is a rational number.
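A quick numerical check of these formulas in plain Python, on the example √(5 + 2√6), where a = 5, b = 2, c = 6, so a² − b²c = 1 and the denesting is √2 + √3 (the example is mine, chosen only for illustration):

```python
import math

a, b, c = 5, 2, 6
root = math.sqrt(a**2 - b**2 * c)   # 1.0, and rational, so denesting is valid
e = (a + root) / 2                  # 3.0
d = (a - root) / 2                  # 2.0

lhs = math.sqrt(a + b * math.sqrt(c))
rhs = math.sqrt(d) + math.sqrt(e)
print(d, e, abs(lhs - rhs) < 1e-12)  # 2.0 3.0 True
```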

19.3 Example of testing for linear dependence of equations

Consider this overdetermined equation system (with 3 equations in just 2 unknowns):

x − 2y + 1 = 0, 3x + 5y − 8 = 0, 4x + 3y − 7 = 0. To test whether the third equation is linearly dependent on the first two, postulate two parameters a and b such that a times the first equation plus b times the second equation equals the third equation. Since this always holds for the right sides, all of which are 0, we merely need to require it to hold for the left sides as well:

a(x − 2y + 1) + b(3x + 5y − 8) = 4x + 3y − 7. Equating the coefficients of x on both sides, equating the coefficients of y on both sides, and equating the constants on both sides gives the following system in the desired parameters a, b:

a + 3b = 4, −2a + 5b = 3, a − 8b = −7. The unique pair of values a, b satisfying the first two equations is (a, b) = (1, 1); since these values also satisfy the third equation, there do in fact exist a, b such that a times the original first equation plus b times the original second equation equals the original third equation; we conclude that the third equation is linearly dependent on the first two. Note that if the constant term in the original third equation had been anything other than –7, the values (a, b) = (1, 1) that satisfied the first two equations in the parameters would not have satisfied the third one (a–8b = constant), so there would exist no a, b satisfying all three equations in the parameters, and therefore the third original equation would be independent of the first two.
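The same test is easy to carry out numerically. A minimal NumPy sketch (NumPy being an assumed third-party library): solve the first two parameter equations, then check whether the third holds:

```python
import numpy as np

# a + 3b = 4 and -2a + 5b = 3, the first two equations in the parameters.
M = np.array([[1.0, 3.0],
              [-2.0, 5.0]])
rhs = np.array([4.0, 3.0])

a, b = np.linalg.solve(M, rhs)
print(a, b)                      # approximately 1.0 1.0
print(np.isclose(a - 8*b, -7))   # True: the third equation is also satisfied,
                                 # so the third original equation is dependent
```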

19.4 Example in complex numbers

The method of equating coefficients is often used when dealing with complex numbers. For example, to divide the complex number a+bi by the complex number c+di, we postulate that the ratio equals the complex number e+fi, and we wish to find the values of the parameters e and f for which this is true. We write

(a + bi) / (c + di) = e + fi, and multiply both sides by the denominator to obtain

(ce − fd) + (ed + cf)i = a + bi.

Equating real terms gives ce − fd = a, and equating coefficients of the imaginary unit i gives

ed + cf = b.

These are two equations in the unknown parameters e and f, and they can be solved to obtain the desired coefficients of the quotient:

e = (ac + bd) / (c² + d²) and f = (bc − ad) / (c² + d²).
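A short plain-Python sketch comparing these formulas with the language's built-in complex division (the function name and the sample numbers are mine, for illustration only):

```python
# (a + bi) / (c + di) = e + fi by equating coefficients, assuming c + di != 0.
def divide(a, b, c, d):
    denom = c**2 + d**2
    return (a*c + b*d) / denom, (b*c - a*d) / denom

a, b, c, d = 3, 2, 1, -4
e, f = divide(a, b, c, d)
print(e, f)                           # about -0.294 and 0.824
print(complex(a, b) / complex(c, d))  # the same value, as e + f*1j
```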

19.5 References

• Tanton, James (2005). Encyclopedia of Mathematics. Facts on File. p. 162. ISBN 0-8160-5124-0.

Chapter 20

Equation

For other uses, see Equation (disambiguation).

The first use of an equals sign, equivalent to 14x + 15 = 71 in modern notation. From The Whetstone of Witte by Robert Recorde (1557).

In mathematics, an equation is an equality containing one or more variables. Solving the equation consists of determining which values of the variables make the equality true. In this situation, variables are also known as unknowns and the values which satisfy the equality are known as solutions. An equation differs from an identity in that an equation is not necessarily true for all possible values of the variable.[1][2]

There are many types of equations, and they are found in all areas of mathematics; the techniques used to examine them differ according to their type.

Algebra studies two main families of equations: polynomial equations and, among them, linear equations. Polynomial equations have the form P(X) = 0, where P is a polynomial. Linear equations have the form a(x) + b = 0, where a is a linear function and b is a vector. To solve them, one uses algorithmic or geometric techniques, coming from linear algebra or mathematical analysis. Changing the domain of a function can change the problem considerably. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution, and, if they exist, to count the number of solutions.

Geometry uses equations to describe geometric figures. The objective is now different, as equations are used to describe geometric properties. In this context, there are two large families of equations, Cartesian equations and parametric equations.

Differential equations are equations involving one or more functions and their derivatives. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model real-life processes in areas such as physics, chemistry, biology, and economics.

The "=" symbol was invented by Robert Recorde (1510–1558), who considered that nothing could be more equal than parallel straight lines with the same length.

20.1 Introduction

20.1.1 Parameters and unknowns

See also: Expression (mathematics)


A strange attractor which arises when solving a certain differential equation.

Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters. Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, …, while coefficients are denoted by letters at the beginning, a, b, c, d, … . For example, the general quadratic equation is usually written ax² + bx + c = 0. The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions.

A system of equations is a set of simultaneous equations, usually in several unknowns, for which the common solutions are sought. Thus a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system

3x + 5y = 2
5x + 8y = 3

has the unique solution x = −1, y = 1.

20.1.2 Analogous illustration

A weighing scale, balance, or seesaw is often presented as an analogy to an equation. Each side of the balance corresponds to one side of the equation. Different quantities can be placed on each side: if the weights on the two sides are equal the scale balances, corresponding to an equality represented by an equation; if not, then the lack of balance corresponds to an inequality represented by an inequation.

In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same.

Illustration of a simple equation; x, y, z are real numbers, analogous to weights.

20.1.3 Identities

Main articles: Identity (mathematics) and List of trigonometric identities

An identity is a statement resembling an equation which is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, it is often useful to combine it with an identity to produce an equation which is more easily soluble. In algebra, a simple identity is the difference of two squares:

x² − y² = (x + y)(x − y) which is true for all x and y.

Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many involving the sine and cosine functions are:

sin²(θ) + cos²(θ) = 1 and

sin(2θ) = 2 sin(θ) cos(θ) which are both true for all values of θ. For example, to solve the equation:

3 sin(θ) cos(θ) = 1 , where θ is known to be between 0 and 45 degrees, using the identity for the product gives

(3/2) sin(2θ) = 1, yielding the solution

θ = (1/2) arcsin(2/3) ≈ 20.9°. Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, the fact that θ is between 0 and 45 degrees implies there is only one solution.
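A quick numerical check of this worked example using only Python's standard math module:

```python
import math

# Solve 3*sin(t)*cos(t) = 1 for t between 0 and 45 degrees via the
# double-angle identity: t = (1/2) * arcsin(2/3).
theta = 0.5 * math.asin(2.0 / 3.0)
print(math.degrees(theta))                    # about 20.9 degrees
print(3 * math.sin(theta) * math.cos(theta))  # approximately 1.0
```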

20.2 Properties

Two equations or two systems of equations are equivalent if they have the same set of solutions. The following operations transform an equation or a system into an equivalent one:

• Adding or subtracting the same quantity to both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero.
• Multiplying or dividing both sides of an equation by a non-zero constant.
• Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum.
• For a system: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity.

If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation x = 1 has the solution x = 1. Raising both sides to the exponent of 2 (which means applying the function f(s) = s² to both sides of the equation) changes the equation to x² = 1, which not only has the previous solution but also introduces the extraneous solution x = −1. Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation. The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination. For more details on this topic, see Equation solving.

20.3 Algebra

20.3.1 Polynomial equations

Main article: Polynomial equation

An algebraic equation or polynomial equation is an equation of the form

P = 0

P = Q where P and Q are polynomials with coefficients in some field, often the field of the rational numbers. An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate and the term polynomial equation is usually preferred to algebraic equation. For example,

x⁵ − 3x + 1 = 0

is an algebraic equation with integer coefficients and

y⁴ + xy/2 = x³/3 − xy² + y² − 1/7 is a multivariate polynomial equation over the rationals. Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression with a finite number of operations involving just those coefficients (that is, they can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not for all. A large amount of research has been devoted to computing efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).

20.3.2 Systems of linear equations

A system of linear equations (or linear system) is a collection of linear equations involving the same set of variables.[3] For example,

3x + 2y − z = 1
2x − 2y + 4z = −2
−x + (1/2)y − z = 0

is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by

x = 1
y = −2
z = −2

since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.

The Nine Chapters on the Mathematical Art is an anonymous Chinese book proposing a method of resolution for linear equations.

In mathematics, the theory of linear systems is the basis and a fundamental part of linear algebra, a subject which is used in most parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.

20.4 Geometry

20.4.1 Analytic geometry

A conic section is the intersection of a plane and a cone of revolution.

In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form ax + by + cz + d = 0, where a, b, c and d are real numbers and x, y, z are the unknowns which correspond to the coordinates of a point in the system given by the orthogonal grid. The values a, b, c are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is as the solution set of a single linear equation with values in R² or as the solution set of two linear equations with values in R.

A conic section is the intersection of a cone with equation x² + y² = z² and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of the cone just given. This formalism allows one to determine the positions and the properties of the focuses of a conic.

The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians.

Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.

20.4.2 Cartesian equations

A Cartesian coordinate system is a coordinate system that specifies each point uniquely in a plane by a pair of numerical coordinates, which are the signed distances from the point to two fixed perpendicular directed lines, measured in the same unit of length. One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian coordinates, its signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).


Cartesian coordinate system with a circle of radius 2 centered at the origin marked in red. The equation of a circle is (x − a)² + (y − b)² = r², where a and b are the coordinates of the center (a, b) and r is the radius.

The invention of Cartesian coordinates in the 17th century by René Descartes (Latinized name: Cartesius) revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4.

20.4.3 Parametric equations

Main article: Parametric equation

A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter.[4][5] For example,

x = cos t
y = sin t

are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve. The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).

20.5 Number theory

20.5.1 Diophantine equations

Main article: Diophantine equation

A Diophantine equation is a polynomial equation in two or more unknowns such that only the integer solutions are searched or studied (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An exponential Diophantine equation is one in which exponents on terms can be unknowns. Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.

20.5.2 Algebraic and transcendental numbers

Main articles: Algebraic number and Transcendental number

An algebraic number is a number that is a root of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.

20.5.3 Algebraic geometry

Main article: Algebraic geometry

Algebraic geometry is a branch of mathematics, classically studying zeros of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations.

20.6 Differential equations

Main article: Differential equation

A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.

In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.

If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.

20.6.1 Ordinary differential equations

Main article: Ordinary differential equation

An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation which may be with respect to more than one independent variable. Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.

20.6.2 Partial differential equations

Main article: Partial differential equation

A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.

20.7 Types of equations

Equations can be classified according to the types of operations and quantities involved. Important types include:

• An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations). These are further classified by degree:

• linear equation for degree one
• quadratic equation for degree two
• cubic equation for degree three
• quartic equation for degree four

• quintic equation for degree five
• sextic equation for degree six
• septic equation for degree seven

• A Diophantine equation is an equation where the unknowns are required to be integers
• A transcendental equation is an equation involving a transcendental function of its unknowns
• A parametric equation is an equation for which the solutions are sought as functions of some other variables, called parameters appearing in the equations
• A functional equation is an equation in which the unknowns are functions rather than simple quantities
• A differential equation is a functional equation involving derivatives of the unknown functions
• An integral equation is a functional equation involving the antiderivatives of the unknown functions
• An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions
• A difference equation is an equation where the unknown is a function f which occurs in the equation through f(x), f(x−1), …, f(x−k), for some whole integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation

20.8 See also

• Equation (poem)
• Expression
• Five Equations That Changed the World: The Power and Poetry of Mathematics (book)
• Formula
• Formula editor
• Functional equation
• History of algebra
• Inequality
• Inequation
• List of equations
• List of scientific equations named after people
• Term (logic)
• Theory of equations

20.9 References

[1] .
[2] "A statement of equality between two expressions. Equations are of two types, identities and conditional equations (or usually simply "equations")". "Equation", in Mathematics Dictionary, Glenn James and Robert C. James (eds.), Van Nostrand, 1968, 3rd ed.; 1st ed. 1948, p. 131.
[3] The subject of this article is basic in mathematics, and is treated in a lot of textbooks. Among them, Lay 2005, Meyer 2001, and Strang 2005 contain the material of this article.
[4] Thomas, George B., and Finney, Ross L., Calculus and Analytic Geometry, Addison Wesley Publishing Co., fifth edition, 1979, p. 91.
[5] Weisstein, Eric W. "Parametric Equations." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/ParametricEquations.html

20.10 External links

• Winplot: General Purpose plotter which can draw and animate 2D and 3D mathematical equations.

• Mathematical equation plotter: Plots 2D mathematical equations, computes integrals, and finds solutions online.

• Equation plotter: A web page for producing and downloading pdf or postscript plots of the solution sets to equations and inequations in two variables (x and y).
• EqWorld—contains information on solutions to many different classes of mathematical equations.

• fxSolver: Online formula database and graphing calculator for mathematics, natural science and engineering.
• EquationSolver: A webpage that can solve single equations and linear equation systems.

• vCalc: A webpage with an extensive user modifiable equation library.

Chapter 21

Euler’s four-square identity

In mathematics, Euler’s four-square identity says that the product of two numbers, each of which is a sum of four squares, is itself a sum of four squares. Specifically:

(a₁² + a₂² + a₃² + a₄²)(b₁² + b₂² + b₃² + b₄²) =
(a₁b₁ + a₂b₂ + a₃b₃ + a₄b₄)² +
(a₁b₂ − a₂b₁ + a₃b₄ − a₄b₃)² +
(a₁b₃ − a₂b₄ − a₃b₁ + a₄b₂)² +
(a₁b₄ + a₂b₃ − a₃b₂ − a₄b₁)².

Euler wrote about this identity in a letter dated May 4, 1748 to Goldbach[1][2] (but he used a different sign convention from the above). It can be proven with elementary algebra and holds in every commutative ring. If the aₖ and bₖ are real numbers, a more elegant proof is available: the identity expresses the fact that the absolute value of the product of two quaternions is equal to the product of their absolute values, in the same way that the Brahmagupta–Fibonacci two-square identity does for complex numbers.

The identity was used by Lagrange to prove his four square theorem. More specifically, it implies that it is sufficient to prove the theorem for prime numbers, after which the more general theorem follows. The sign convention used above corresponds to the signs obtained by multiplying two quaternions. Other sign conventions can be obtained by changing any aₖ to −aₖ, bₖ to −bₖ, or by changing the signs inside any of the squared terms on the right hand side.
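Because the identity holds in every commutative ring, it is easy to spot-check on integers. A minimal plain-Python sketch that tests it on a few random quadruples (a sanity check only, not a proof):

```python
import random

def euler_identity_holds(a, b):
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    lhs = sum(x*x for x in a) * sum(x*x for x in b)
    rhs = ((a1*b1 + a2*b2 + a3*b3 + a4*b4)**2 +
           (a1*b2 - a2*b1 + a3*b4 - a4*b3)**2 +
           (a1*b3 - a2*b4 - a3*b1 + a4*b2)**2 +
           (a1*b4 + a2*b3 - a3*b2 - a4*b1)**2)
    return lhs == rhs

for _ in range(5):
    a = [random.randint(-9, 9) for _ in range(4)]
    b = [random.randint(-9, 9) for _ in range(4)]
    print(a, b, euler_identity_holds(a, b))   # the last column is always True
```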

Hurwitz’s theorem states that an identity of the form

(a₁² + a₂² + a₃² + ... + aₙ²)(b₁² + b₂² + b₃² + ... + bₙ²) = c₁² + c₂² + c₃² + ... + cₙ²

where the cᵢ are bilinear functions of the aᵢ and bᵢ is possible only for n = 1, 2, 4, 8. However, the more general Pfister’s theorem allows that if the cᵢ are just rational functions of one set of variables, hence have a denominator, then it is possible for all n = 2ᵐ.[3] Thus, a different kind of four-square identity can be given as,

(a₁² + a₂² + a₃² + a₄²)(b₁² + b₂² + b₃² + b₄²) =
(a₁b₄ + a₂b₃ + a₃b₂ + a₄b₁)² +
(a₁b₃ − a₂b₄ + a₃b₁ − a₄b₂)² +
(a₁b₂ + a₂b₁ + (a₃u₁ − a₄u₂)/(b₁² + b₂²))² +
(a₁b₁ − a₂b₂ − (a₄u₁ + a₃u₂)/(b₁² + b₂²))²

where

u₁ = b₁²b₄ − 2b₁b₂b₃ − b₂²b₄
u₂ = b₁²b₃ + 2b₁b₂b₄ − b₂²b₃

Note also the incidental fact that,

u₁² + u₂² = (b₁² + b₂²)²(b₃² + b₄²)

21.1 See also

• Brahmagupta–Fibonacci identity (sums of two squares)
• Degen’s eight-square identity
• Pfister’s sixteen-square identity
• Latin square

21.2 References

[1] Leonhard Euler: Life, Work and Legacy, R.E. Bradley and C.E. Sandifer (eds), Elsevier, 2007, p. 193

[2] Mathematical Evolutions, A. Shenitzer and J. Stillwell (eds), Math. Assoc. America, 2002, p. 174

[3] Pfister’s Theorem on Sums of Squares, Keith Conrad, http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/pfister.pdf

21.3 External links

• A Collection of Algebraic Identities
• Lettre CXV from Euler to Goldbach

Chapter 22

Extraneous and missing solutions

In mathematics, an extraneous solution represents a solution, such as that to an equation, that emerges from the process of solving the problem but is not a valid solution to the original problem. A missing solution is a solution that was a valid solution to the original problem, but disappeared during the process of solving the problem. Both are frequently the consequence of performing operations that are not invertible for some or all values of the variables, which disturbs the chain of logical implications in the proof.

22.1 Extraneous solutions: multiplication

One of the basic principles of algebra is that one can multiply both sides of an equation by the same expression without changing the equation’s solutions. However, strictly speaking, this is not true, in that multiplication by certain expressions may introduce new solutions that were not present before. For example, consider the following simple equation:

x + 2 = 0

If we multiply both sides by zero, we get:

0 = 0

This is true for all values of x, so the solution set is all real numbers. But clearly not all real numbers are solutions to the original equation. The problem is that multiplication by zero is not invertible: if we multiply by any nonzero value, we can undo it immediately by dividing by the same value, but division by zero is not allowed, so multiplication by zero cannot be undone. More subtly, suppose we take the same equation and multiply both sides by x. We get:

x(x + 2) = (0)x
x² + 2x = 0

This quadratic equation has two solutions, −2 and 0. But if zero is substituted for x into the original equation, the result is the invalid equation 2 = 0. This counterintuitive result occurs because in the case where x = 0, multiplying both sides by x multiplies both sides by zero, and so necessarily produces a true equation just as in the first example.

In general, whenever we multiply both sides of an equation by an expression involving variables, we introduce extraneous solutions wherever that expression is equal to zero. But it’s not sufficient to exclude these values, because they may have been legitimate solutions to the original equation. For example, suppose we multiply both sides of our original equation x + 2 = 0 by x + 2. We get:

(x + 2)(x + 2) = 0(x + 2)


x² + 4x + 4 = 0 has only one real solution: x = −2, and this is a solution to the original equation, so it cannot be excluded, even though x + 2 is zero for this value of x.

22.2 Extraneous solutions: rational

Extraneous solutions can arise naturally in problems involving fractions with variables in the denominator. For ex- ample, consider this equation:

1/(x − 2) = 3/(x + 2) − 6x/((x − 2)(x + 2)).

To begin solving, we multiply each side of the equation by the least common denominator of all the fractions contained in the equation. In this case, the LCD is (x − 2)(x + 2) . After performing these operations, the fractions are eliminated, and the equation becomes: x + 2 = 3(x − 2) − 6x .

Solving this yields the single solution x = −2. However, when we substitute the solution back into the original equation, we obtain:

1/(−2 − 2) = 3/(−2 + 2) − 6(−2)/((−2 − 2)(−2 + 2)).

The equation then becomes:

1/(−4) = 3/0 + 12/0. This equation is not valid, since one cannot divide by zero. Because of this, the only effective way to deal with multiplication by expressions involving variables is to substitute each of the solutions obtained into the original equation and confirm that this yields a valid equation. After discarding solutions that yield an invalid equation, we will have the correct set of solutions. Note that in some cases all solutions may be discarded, in which case the original equation has no solution.
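A minimal plain-Python sketch of this checking step for the equation above (the helper name is_valid is mine, purely for illustration): a candidate is rejected if it makes any denominator zero or if the two sides disagree.

```python
import math

def is_valid(x):
    # Reject candidates that make any denominator zero (extraneous values).
    if (x - 2) == 0 or (x + 2) == 0:
        return False
    lhs = 1 / (x - 2)
    rhs = 3 / (x + 2) - 6 * x / ((x - 2) * (x + 2))
    return math.isclose(lhs, rhs)

print(is_valid(-2))   # False: the only candidate, x = -2, is extraneous,
                      # so the original equation has no solution
```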

22.3 Missing solutions: division

Extraneous solutions are not too difficult to deal with because they just require checking all solutions for validity. However, more insidious are missing solutions, which can occur when performing operations on expressions that are invalid for certain values of those expressions. For example, if we were solving this equation, the correct solution is to subtract 4 from both sides, then divide both sides by 2:

2x + 4 = 0

2x = −4
x = −2

By analogy, we might suppose we can solve the following equation by subtracting 2x from both sides, then dividing by x:

x² + 2x = 0
x² = −2x
x = −2

The solution x = −2 is in fact a valid solution to the original equation; but the other solution, x = 0, has disappeared. The problem is that we divided both sides by x, which is zero when x = 0. It’s generally possible (and advisable) to avoid dividing by any expression that can be zero; however, where this is necessary, it’s sufficient to ensure that any values of the variables that make it zero also fail to satisfy the original equation. For example, suppose we have this equation:

x + 2 = 0

It is valid to divide both sides by x − 2, obtaining the following equation:

(x + 2)/(x − 2) = 0

This is valid because the only value of x that makes x − 2 equal to zero is x = 2, and x = 2 is not a solution to the original equation. In some cases we're not interested in certain solutions; for example, we may only want solutions where x is positive. In this case it's okay to divide by an expression that is only zero when x is negative, because this can only remove solutions we don't care about.

22.4 Other operations

Multiplication and division are not the only operations that can modify the solution set. For example, take the problem:

x² = 4.

If we take the positive square root of both sides, we get: x = 2.

We're not taking the square root of any negative values here, since both x² and 4 are necessarily positive. But we've lost the solution x = −2. The reason is that x is actually not in general the positive square root of x². If x is negative, the positive square root of x² is −x. If the step is taken correctly, it leads instead to the equation:

√(x²) = √4.

|x| = 2.

x = ±2.

This equation has the same two solutions as the original one: x = 2 and x = −2.

22.5 See also

• Invalid proof

Chapter 23

Factorization

This article is about the mathematical concept. For other uses, see Factor and Integer factorization. “Common factor” redirects here. For the greatest (or highest) common divisor, see Greatest common divisor. In mathematics, factorization (also factorisation in some forms of British English) or factoring is the decomposition

The polynomial x² + cx + d, where a + b = c and ab = d, can be factorized into (x + a)(x + b).

of an object (for example, a number, a polynomial, or a matrix) into a product of other objects, or factors, which when multiplied together give the original. For example, the number 15 factors into primes as 3 × 5, and the polynomial x² − 4 factors as (x − 2)(x + 2). In all cases, a product of simpler objects is obtained.

The aim of factoring is usually to reduce something to “basic building blocks”, such as numbers to prime numbers, or polynomials to irreducible polynomials. Factoring integers is covered by the fundamental theorem of arithmetic and factoring polynomials by the fundamental theorem of algebra. Viète’s formulas relate the coefficients of a polynomial to its roots.

The opposite of polynomial factorization is expansion, the multiplying together of polynomial factors to an “expanded” polynomial, written as just a sum of terms.

Integer factorization for large integers appears to be a difficult problem. There is no known method to carry it out quickly. Its complexity is the basis of the assumed security of some public key cryptography algorithms, such as RSA.

A matrix can also be factorized into a product of matrices of special types, for an application in which that form is convenient. One major example of this uses an orthogonal or unitary matrix, and a triangular matrix. There are different types: QR decomposition, LQ, QL, RQ, RZ.

Another example is the factorization of a function as the composition of other functions having certain properties; for example, every function can be viewed as the composition of a surjective function with an injective function. This situation is generalized by factorization systems.


23.1 Integers

Main article: Integer factorization

By the fundamental theorem of arithmetic, every positive integer greater than 1 has a unique prime factorization. Given an algorithm for integer factorization, one can factor any integer down to its constituent primes by repeated application of this algorithm.[1] For very large numbers, no efficient classical algorithm is known.
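As an illustration only, the repeated-division idea can be sketched in a few lines of Python; this naive trial division works for modest numbers but is hopeless for the very large integers used in cryptography:

    def prime_factors(n):
        # factor n > 1 into primes by trial division
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(prime_factors(15))    # [3, 5]
    print(prime_factors(360))   # [2, 2, 2, 3, 3, 5]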

23.2 Polynomials

Main article: Factorization of polynomials

Modern techniques for factoring polynomials are fast and efficient, but use sophisticated mathematical ideas (see Factorization of polynomials). These techniques are used in the construction of computer routines for carrying out polynomial factorization in Computer algebra systems. The more classical hand techniques rely on either the polynomial to be factored having low degree or the recognition of the polynomial as belonging to a certain class of known examples and are not very suitable for computer implementation. This article is concerned with these classical techniques.

While the general notion of factoring just means writing an expression as a product of simpler expressions, the vague term “simpler” will be defined more precisely for special classes of expressions. When factoring polynomials this means that the factors are to be polynomials of smaller degree. Thus, while x² − y = (x + √y)(x − √y) is a factorization of the expression, it is not a polynomial factorization since the factors are not polynomials.[2] Also, the factoring of a constant term, as in 3x² − 6x + 12 = 3(x² − 2x + 4), would not be considered a polynomial factorization since one of the factors does not have a smaller degree than the original expression.[3]

Another issue concerns the coefficients of the factors. In basic treatments it is desirable to have the coefficients of the factors be of the same type as the coefficients of the original polynomial, that is factoring polynomials with integer coefficients into factors with integer coefficients, or factoring polynomials with real coefficients into polynomials with real coefficients. It is not always possible to do this, and a polynomial that can not be factored in this way is said to be irreducible over this type of coefficient. Thus, x² − 2 is irreducible over the integers and x² + 4 is irreducible over the reals. In the first example, the integers 1 and −2 can also be thought of as real numbers, and if they are, then x² − 2 = (x + √2)(x − √2) shows that this polynomial factors over the reals (sometimes it is said that the polynomial splits over the reals). Similarly, since the integers 1 and 4 can be thought of as real and hence complex numbers, x² + 4 splits over the complex numbers, i.e. x² + 4 = (x + 2i)(x − 2i).

The fundamental theorem of algebra can be stated as: Every polynomial of degree n with complex number coefficients splits completely into n linear factors.[4] The terms in these factors, which are the roots of the polynomial, may be real or complex. Since complex roots of polynomials with real coefficients come in complex conjugate pairs, this result implies that every polynomial with real coefficients splits into linear and/or irreducible quadratic factors with real coefficients (because when two linear factors with complex conjugate terms are multiplied together, the result is a quadratic with real coefficients). Even though the structure of the factorization is known in these cases, finding the actual factors can be computationally challenging, and by the Abel-Ruffini theorem the coefficients and additive terms in the factors may not be expressible in terms of radicals.

23.2.1 General methods

There are only a few general methods that can be applied to any polynomial in either one variable (the univariate case) or several variables (the multivariate case).

Highest common factor

Finding, by inspection, the monomial that is the highest common factor (also called the greatest common divisor) of all the terms of the polynomial and factoring it out as a common factor is an application of the distributive law. This is the most commonly used factoring technique. For example:[5]

6x³y² + 8x⁴y³ − 10x⁵y³ = (2x³y²)(3 + 4xy − 5x²y).
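A quick way to check such a factorization (a sketch assuming SymPy is available) is to factor the polynomial symbolically and verify that expanding the result reproduces the original expression:

    from sympy import symbols, factor, expand

    x, y = symbols('x y')
    p = 6*x**3*y**2 + 8*x**4*y**3 - 10*x**5*y**3
    f = factor(p)
    print(f)                       # the common monomial 2*x**3*y**2 is pulled out (up to sign and ordering)
    print(expand(f - p) == 0)      # True: the factorization is exact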

Factoring by grouping

A method that is sometimes useful, but not guaranteed to work, is factoring by grouping. Factoring by grouping is done by placing the terms in the polynomial into two or more groups, where each group can be factored by a known method. The results of these partial factorizations can sometimes be combined to give a factorization of the original expression. For example, to factor the polynomial

4x² + 20x + 3yx + 15y

1. group similar terms, (4x² + 20x) + (3yx + 15y),

2. factor out the highest common factor in each grouping, 4x(x + 5) + 3y(x + 5),

3. again factor out the binomial common factor, (x + 5)(4x + 3y).

While grouping may not lead to a factorization in general, if the polynomial expression to be factored consists of four terms and is the result of multiplying two binomial expressions (by the FOIL method for instance), then the grouping technique can lead to a factorization, as in the above example.

Using the factor theorem

Main article: Factor theorem

For a univariate polynomial, p(x), the factor theorem states that a is a root of the polynomial (that is, p(a) = 0, also called a zero of the polynomial) if and only if (x − a) is a factor of p(x). The other factor in such a factorization of p(x) can be obtained by polynomial long division or synthetic division. For example, consider the polynomial x³ − 3x + 2. By inspection we see that 1 is a root of this polynomial (observe that the coefficients add up to 0), so (x − 1) is a factor of the polynomial. By long division we have x³ − 3x + 2 = (x − 1)(x² + x − 2).
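The same computation can be reproduced with a computer algebra system; the sketch below (assuming SymPy) checks that 1 is a root and divides out the factor (x − 1):

    from sympy import symbols, div, factor

    x = symbols('x')
    p = x**3 - 3*x + 2
    print(p.subs(x, 1))        # 0, so (x - 1) is a factor by the factor theorem
    q, r = div(p, x - 1, x)    # polynomial division
    print(q, r)                # quotient x**2 + x - 2, remainder 0
    print(factor(p))           # (x - 1)**2*(x + 2)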

Univariate case, using properties of the roots

When a univariate polynomial is completely factored into linear factors (degree one factors), all of the roots of the polynomial are visible and by multiplying the factors together again, the relationship between the roots and the coefficients can be observed. Formally, these relationships are known as Vieta’s formulas. These formulas do not help in factorizing the polynomial except as a guide to making good guesses at what possible roots may be. However, if some additional information about the roots is known, this can be combined with the formulas to obtain the roots and thus the factorization.[6]

For example, we can factor x³ − 5x² − 16x + 80 if we know that the sum of two of its roots is zero. Let r1, r2 and r3 be the three roots of this polynomial. Then Vieta’s formulas are:

r1 + r2 + r3 = 5

r1r2 + r2r3 + r3r1 = −16

r1r2r3 = −80.

Assuming that r2 + r3 = 0 immediately gives r1 = 5 and reduces the other two equations to r2² = 16. Thus the roots are 5, 4 and −4 and we have x³ − 5x² − 16x + 80 = (x − 5)(x − 4)(x + 4).
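Vieta’s formulas are easy to verify numerically; the sketch below (assuming SymPy) recovers the roots of this cubic and checks the three symmetric-function identities used above:

    from sympy import symbols, roots

    x = symbols('x')
    rts = roots(x**3 - 5*x**2 - 16*x + 80, x)   # maps each root (5, 4 and -4) to its multiplicity
    r1, r2, r3 = rts                            # iterating the dict yields the roots
    print(r1 + r2 + r3)                         # 5
    print(r1*r2 + r2*r3 + r3*r1)                # -16
    print(r1*r2*r3)                             # -80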

Finding rational roots

If a (univariate) polynomial, f(x), has a rational root, p/q (p and q are integers and q ≠ 0), then by the factor theorem f(x) has the factor,

(x − p/q) = (1/q)(qx − p).

If, in addition, the polynomial f(x) has integer coefficients, then q must evenly divide the integer portion of the highest common factor of the terms of the polynomial, and, in the factorization of f(x), only the factor (qx − p) will be visible.

If a (univariate) polynomial with integer coefficients, say,

a₀xⁿ + a₁xⁿ⁻¹ + ... + aₙ₋₁x + aₙ

has a rational root p/q, where p and q are integers that are relatively prime, then by the rational root test p is an integer divisor of aₙ and q is an integer divisor of a₀.[7]

If we wished to factorize the polynomial 2x³ − 7x² + 10x − 6 we could look for rational roots p/q where p divides −6, q divides 2 and p and q have no common factor greater than 1. By inspection we see that this polynomial can have no negative roots. Assume that q = 2 (otherwise we would be looking for integer roots), substitute x = p/2 and set the polynomial equal to 0. By dividing by 4, we obtain the polynomial equation p³ − 7p² + 20p − 24 = 0 that will have an integer solution of 1 or 3 if the original polynomial had a rational root of the type we seek. Since 3 is a solution of this equation (and 1 is not), the original polynomial had the rational root 3/2 and the corresponding factor (2x − 3). By polynomial long division we have the factorization 2x³ − 7x² + 10x − 6 = (2x − 3)(x² − 2x + 2).

For a quadratic polynomial with integer coefficients having rational roots, the above considerations lead to a factorization technique known as the ac method of factorization.[8] Suppose that the quadratic polynomial with integer coefficients is:

ax² + bx + c

and it has rational roots, p/q and u/v. (If the discriminant, b² − 4ac, is a square number these exist, otherwise we have irrational or complex solutions, and there will be no rational roots.) Both q and v must be divisors of a so we may write these fractions with a common denominator of a, that is, they may be written as −r/a and −s/a (the use of the negatives is cosmetic and leads to a prettier final result.) Then,

ax² + bx + c = a(x² + (b/a)x + c/a) = a · (1/a)(ax + r) · (1/a)(ax + s) = (ax + r)(ax + s)/a.

So, we have:

a²x² + abx + ac = (ax + r)(ax + s), where rs = ac and r + s = b. The ac method for factoring the quadratic polynomial is to find r and s, the two factors of the number ac whose sum is b and then use them in the factorization formula of the original quadratic above. As an example consider the quadratic polynomial:

6x² + 13x + 6.

Inspection of the factors of ac = 36 leads to 4 + 9 = 13 = b.

6x² + 13x + 6 = (6x + 4)(6x + 9)/6 = [2(3x + 2)][3(2x + 3)]/6 = (3x + 2)(2x + 3)
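The search for r and s is a finite check over the divisors of ac, so the ac method is easy to automate; a small pure-Python sketch for the example above:

    a, b, c = 6, 13, 6
    ac = a * c

    r = s = None
    for cand in range(1, abs(ac) + 1):      # positive divisors suffice here since b > 0 and ac > 0
        if ac % cand == 0 and cand + ac // cand == b:
            r, s = cand, ac // cand
            break

    print(r, s)    # 4 9  (rs = 36 and r + s = 13)
    # hence 6x^2 + 13x + 6 = (6x + 4)(6x + 9)/6 = (3x + 2)(2x + 3)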

23.2.2 Recognizable patterns

While taking the product of two (or more) expressions can be done by following a multiplication algorithm, the reverse process of factoring relies frequently on the recognition of a pattern in the expression to be factored and recalling how such a pattern arises. The following are some well known patterns.[9]

Difference of two squares

Main article: Difference of two squares

A common type of algebraic factoring is for the difference of two squares. It is the application of the formula

a² − b² = (a + b)(a − b)

to any two terms, whether or not they are perfect squares. This basic form is often used with more complicated expressions that may not at first look like the difference of two squares. For example,

a² + 2ab + b² − x² + 2xy − y² = (a² + 2ab + b²) − (x² − 2xy + y²) = (a + b)² − (x − y)² = (a + b + x − y)(a + b − x + y).

Sum/difference of two cubes

A visual representation of the factorization of cubes using volumes. For a sum of cubes, simply substitute z=-y.

Another formula for factoring is for the sum or difference of two cubes. The sum can be factored by

a³ + b³ = (a + b)(a² − ab + b²),

and the difference by

a³ − b³ = (a − b)(a² + ab + b²).

Difference of two fourth powers

Another formula is for the difference of two fourth powers, which is

a⁴ − b⁴ = (a² + b²)(a² − b²) = (a² + b²)(a + b)(a − b).

Sum/difference of two nth powers

The above factorizations of differences or sums of powers can be extended to any positive integer power n. For any n, a general factorization is:

aⁿ − bⁿ = (a − b)(aⁿ⁻¹ + baⁿ⁻² + b²aⁿ⁻³ + ... + bⁿ⁻²a + bⁿ⁻¹).

The corresponding formula for the sum of two nth powers depends on whether n is even or odd. If n is odd, b can be replaced by −b in the above formula, to give

aⁿ + bⁿ = (a + b)(aⁿ⁻¹ − baⁿ⁻² + b²aⁿ⁻³ − ... − bⁿ⁻²a + bⁿ⁻¹).

If n is even, we consider two cases:

1. If n is a power of 2 then aⁿ + bⁿ is unfactorable (more precisely, irreducible over the rational numbers).

2. Otherwise, n = m · 2ᵏ, m > 1, k > 0 where m is odd. In this case we have,

a^n + b^n = (a^(2^k) + b^(2^k))(a^(n−2^k) − a^(n−2·2^k) b^(2^k) + a^(n−3·2^k) b^(2·2^k) − ... − a^(2^k) b^(n−2·2^k) + b^(n−2^k)) = (a^(2^k) + b^(2^k)) ∑_{i=1}^{m} a^((m−i)·2^k) (−b^(2^k))^(i−1).

Specifically, for some small values of n we have:

a⁵ + b⁵ = (a + b)(a⁴ − a³b + a²b² − ab³ + b⁴),
a⁵ − b⁵ = (a − b)(a⁴ + a³b + a²b² + ab³ + b⁴),
a⁶ + b⁶ = (a² + b²)(a⁴ − a²b² + b⁴),
a⁶ − b⁶ = (a³ + b³)(a³ − b³) = (a + b)(a − b)(a² − ab + b²)(a² + ab + b²),
a⁷ + b⁷ = (a + b)(a⁶ − a⁵b + a⁴b² − a³b³ + a²b⁴ − ab⁵ + b⁶),
a⁷ − b⁷ = (a − b)(a⁶ + a⁵b + a⁴b² + a³b³ + a²b⁴ + ab⁵ + b⁶).

Sum/difference of two nth powers over the field of the algebraic numbers

The above factorizations give factors with coefficients in the same field as those of the expression being factored—for example, a polynomial with rational coefficients (±1 in many cases above) is split into factors which themselves have rational coefficients. However, a factorization into factors with algebraic numbers as coefficients can yield lower-degree factors, as in the following formulas which can be proven by going through the complex conjugate roots of f(a) = aⁿ ± bⁿ.

The sum of two terms that have equal even powers is factored by

a^(2n) + b^(2n) = ∏_{k=1}^{n} (a² − 2ab cos((2k − 1)π/(2n)) + b²).

The difference of two terms that have equal even powers is factored by

a^(2n) − b^(2n) = (a − b)(a + b) ∏_{k=1}^{n−1} (a² − 2ab cos(kπ/n) + b²).

The sum or difference of two terms that have equal odd powers is factored by

a^(2n+1) ± b^(2n+1) = (a ± b) ∏_{k=1}^{n} (a² ± 2ab cos(2kπ/(2n + 1)) + b²) = (a ± b) ∏_{k=1}^{n} (a² ± 2ab(−1)ᵏ cos(kπ/(2n + 1)) + b²).

For instance, the sum or difference of two fifth powers is factored by

a⁵ ± b⁵ = (a ± b)(a² ∓ ((1 − √5)/2)ab + b²)(a² ∓ ((1 + √5)/2)ab + b²),

and the sum of two fourth powers is factored by

a⁴ + b⁴ = (a² − √2 ab + b²)(a² + √2 ab + b²).

Binomial expansions

The binomial theorem supplies patterns of coefficients that permit easily recognized factorizations when the polynomial is a power of a binomial expression. For example, the perfect square trinomials are the quadratic polynomials that can be factored as follows:

a² + 2ab + b² = (a + b)²,

and

a² − 2ab + b² = (a − b)².

Some cubic polynomials are four term perfect cubes that can be factored as:

a³ + 3a²b + 3ab² + b³ = (a + b)³,

and

a³ − 3a²b + 3ab² − b³ = (a − b)³.

In general, the coefficients of the expanded polynomial (a + b)ⁿ are given by the n-th row of Pascal’s triangle. The coefficients of (a − b)ⁿ have the same absolute value but alternate in sign.

A visual illustration of the identity (a + b)² = a² + 2ab + b²

Other factorization formulas

x² + y² + z² + 2(xy + yz + xz) = (x + y + z)²

x³ + y³ + z³ − 3xyz = (x + y + z)(x² + y² + z² − xy − xz − yz)

x³ + y³ + z³ + 3x²(y + z) + 3y²(x + z) + 3z²(x + y) + 6xyz = (x + y + z)³

x⁴ + x²y² + y⁴ = (x² + xy + y²)(x² − xy + y²).

23.2.3 Using formulas for polynomial roots

Any univariate quadratic polynomial (polynomials of the form ax² + bx + c) can be factored over the field of complex numbers using the quadratic formula, as follows:

ax² + bx + c = a(x − α)(x − β) = a(x − (−b + √(b² − 4ac))/(2a))(x − (−b − √(b² − 4ac))/(2a)),

where α and β are the two roots of the polynomial, either both real or both complex in the case where a, b, c are all real, found with the quadratic formula. The quadratic formula is valid for all polynomials with coefficients in any field (in particular, the real or complex numbers) except those that have characteristic two.[10] There are also formulas for cubic and quartic polynomials which can be used in the same way. However, there are no algebraic formulas in terms of the coefficients that apply to all univariate polynomials of a higher degree, by the Abel-Ruffini theorem.
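For a numerical illustration of factoring over the complex numbers with the quadratic formula, the standard library's cmath module is enough (a minimal sketch; x² + 4 is the irreducible-over-the-reals example from above):

    import cmath

    def complex_roots(a, b, c):
        d = cmath.sqrt(b * b - 4 * a * c)       # works even when the discriminant is negative
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

    alpha, beta = complex_roots(1, 0, 4)        # the polynomial x**2 + 4
    print(alpha, beta)                          # 2j and -2j, so x**2 + 4 = (x - 2i)(x + 2i)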

23.2.4 Factoring over the complex numbers

Sum of two squares

If a and b represent real numbers, then the sum of their squares can be written as the product of complex numbers. This produces the factorization formula:

a² + b² = (a + bi)(a − bi).

For example, 4x² + 49 can be factored into (2x + 7i)(2x − 7i).

23.3 Matrices

Main article: Matrix decomposition

23.4 Unique factorization domains

Main article: Unique factorization domain

23.4.1 Euclidean domains

Main article: Euclidean domain

23.5 See also

• Completing the square

• Euler’s factorization method

• Fermat’s factorization method

• Integer factorization

• Monoid factorisation

• Multiplicative partition

• Partition (number theory) - A way of writing a number as a sum of positive integers.

• Prime factor

• Program synthesis

• Table of Gaussian integer factorizations

23.6 Notes

[1] Hardy; Wright (1980). An Introduction to the Theory of Numbers (5th ed.). Oxford Science Publications. ISBN 978-0198531715.

[2] Fite 1921, p. 20

[3] Even if the 3 is thought of as a constant polynomial so that this could be considered a factorization into polynomials.

[4] Klein 1925, pp. 101-102

[5] Fite 1921, p. 19

[6] Burnside & Panton 1960, p. 38

[7] Dickson 1922, p. 27

[8] Stover, Christopher AC Method - Mathworld

[9] Selby 1970, p. 101

[10] In these fields 2 = 0 so the division in the formula is not valid. There are other ways to find roots of quadratic equations over these fields.

23.7 References

• Burnside, William Snow; Panton, Arthur William (1960) [1912], The Theory of Equations with an introduction to the theory of binary algebraic forms (Volume one), Dover

• Dickson, Leonard Eugene (1922), First Course in the Theory of Equations, New York: John Wiley & Sons

• Fite, William Benjamin (1921), College Algebra (Revised), Boston: D. C. Heath & Co.

• Klein, Felix (1925), Elementary Mathematics from an Advanced Standpoint; Arithmetic, Algebra, Analysis, Dover

• Selby, Samuel M., CRC Standard Mathematical Tables (18th ed.), The Chemical Rubber Co.

23.8 External links

• Hazewinkel, Michiel, ed. (2001), “Factorization of polynomials”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• One hundred million numbers factored on html pages.

• WIMS Factoris is an online factorization tool.

• Wolfram Alpha can factorize too.

Chapter 24

FOIL method

This article is about a mnemonic. For other types of foil, see Foil (disambiguation). In elementary algebra, FOIL is a mnemonic for the standard method of multiplying two binomials—hence the method may be referred to as the FOIL method. The word FOIL is an acronym for the four terms of the product:

• First (“first” terms of each binomial are multiplied together)

• Outer (“outside” terms are multiplied—that is, the first term of the first binomial and the second term of the second)

• Inner (“inside” terms are multiplied—second term of the first binomial and first term of the second)

• Last (“last” terms of each binomial are multiplied)

The general form is:

(a + b)(c + d) = ac + ad + bc + bd,

where ac is the “first” product, ad the “outside” product, bc the “inside” product, and bd the “last” product. Note that a is both a “first” term and an “outer” term; b is both a “last” and “inner” term, and so forth. The order of the four terms in the sum is not important, and need not match the order of the letters in the word FOIL.

The FOIL method is a special case of a more general method for multiplying algebraic expressions using the distributive law. The word FOIL was originally intended solely as a mnemonic for high-school students learning algebra, but many students and educators in the United States now use the word “foil” as a verb meaning “to expand the product of two binomials”. This neologism has not gained widespread acceptance in the mathematical community.

24.1 Examples

The FOIL method is most commonly used to multiply linear binomials. For example,

(x + 3)(x + 5) = x · x + x · 5 + 3 · x + 3 · 5 = x² + 5x + 3x + 15 = x² + 8x + 15

If either binomial involves subtraction, the corresponding terms must be negated. For example,

(2x − 3)(3x − 4) = (2x)(3x) + (2x)(−4) + (−3)(3x) + (−3)(−4) = 6x² − 8x − 9x + 12 = 6x² − 17x + 12
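Both expansions can be confirmed with a computer algebra system; a minimal sketch assuming SymPy:

    from sympy import symbols, expand

    x = symbols('x')
    print(expand((x + 3) * (x + 5)))        # x**2 + 8*x + 15
    print(expand((2*x - 3) * (3*x - 4)))    # 6*x**2 - 17*x + 12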


24.2 The distributive law

The FOIL method is equivalent to a two-step process involving the distributive law:

(a + b)(c + d) = a(c + d) + b(c + d) = ac + ad + bc + bd

In the first step, the (c + d) is distributed over the addition in the first binomial. In the second step, the distributive law is used to simplify each of the two terms. Note that this process involves a total of three applications of the distributive property.

24.3 Reverse FOIL

The FOIL rule converts a product of two binomials into a sum of four (or fewer, if like terms are then combined) monomials. The reverse process is called factoring or factorization. In particular, if the proof above is read in reverse it illustrates the technique called factoring by grouping.

24.4 Table as an alternative to FOIL

A visual memory tool can replace the FOIL mnemonic for a pair of polynomials with any number of terms. Make a table with the terms of the first polynomial on the left edge and the terms of the second on the top edge, then fill in the table with products. The table equivalent to the FOIL rule looks like this.

×   c    d
a   ac   ad
b   bc   bd

In the case that these are polynomials, (ax + b)(cx + d), the terms of a given degree are found by adding along the antidiagonals

×    cx     d
ax   acx²   adx
b    bcx    bd

so (ax + b)(cx + d) = acx² + (ad + bc)x + bd.

To multiply (a + b + c)(w + x + y + z), the table would be as follows.

×   w    x    y    z
a   aw   ax   ay   az
b   bw   bx   by   bz
c   cw   cx   cy   cz

The sum of the table entries is the product of the polynomials. Thus

(a + b + c)(w + x + y + z) = aw + ax + ay + az + bw + bx + by + bz + cw + cx + cy + cz.

Similarly, to multiply (ax² + bx + c)(dx³ + ex² + fx + g), one writes the same table

×   d    e    f    g
a   ad   ae   af   ag
b   bd   be   bf   bg
c   cd   ce   cf   cg

and sums along antidiagonals:

(ax² + bx + c)(dx³ + ex² + fx + g) = adx⁵ + (ae + bd)x⁴ + (af + be + cd)x³ + (ag + bf + ce)x² + (bg + cf)x + cg.
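The table method is exactly a convolution of the two coefficient lists: the entry in row i and column j contributes to the (i + j)-th antidiagonal. A small sketch (pure Python, with a hypothetical numeric example) makes this explicit:

    def multiply(p, q):
        # coefficients are listed from the highest degree down, e.g. ax^2 + bx + c -> [a, b, c]
        result = [0] * (len(p) + len(q) - 1)
        for i, pi in enumerate(p):
            for j, qj in enumerate(q):
                result[i + j] += pi * qj      # each table entry lands on one antidiagonal
        return result

    # (x^2 + 2x + 3)(2x^3 + x^2 + 4x + 5)
    print(multiply([1, 2, 3], [2, 1, 4, 5]))  # [2, 5, 12, 16, 22, 15]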

24.5 Generalizations

The FOIL rule cannot be directly applied to expanding products with more than two multiplicands, or multiplicands with more than two summands. However, applying the associative law and recursive foiling allows one to expand such products. For instance,

(a + b + c + d)(x + y + z + w) = ((a + b) + (c + d))((x + y) + (z + w)) = (a + b)(x + y) + (a + b)(z + w) + (c + d)(x + y) + (c + d)(z + w) = ax + ay + bx + by + az + aw + bz + bw + cx + cy + dx + dy + cz + cw + dz + dw.

Alternate methods based on distributing forgo the use of the FOIL rule, but may be easier to remember and apply. For example,

(a + b + c + d)(x + y + z + w) = (a + (b + c + d))(x + y + z + w) = a(x + y + z + w) + (b + c + d)(x + y + z + w) = a(x + y + z + w) + (b + (c + d))(x + y + z + w) = a(x + y + z + w) + b(x + y + z + w) + (c + d)(x + y + z + w) = a(x + y + z + w) + b(x + y + z + w) + c(x + y + z + w) + d(x + y + z + w) = ax + ay + az + aw + bx + by + bz + bw + cx + cy + cz + cw + dx + dy + dz + dw.

24.6 See also

• Binomial theorem

• Factorization

24.7 Further reading

• Steege, Ray; Bailey, Kerry (1997), Schaum’s Outline of Theory and Problems of Intermediate Algebra, Schaum’s Outline Series, New York: McGraw–Hill, p. 54, ISBN 978-0-07-060839-9

A visual representation of the FOIL rule. Each colored line represents two terms that must be multiplied.

Chapter 25

Geometry of roots of real polynomials

Polynomial of degree 5: ƒ(x) = 1/20 · (x + 4)(x + 2)(x + 1)(x − 1)(x − 3) + 2

Graphical methods provide a means of determining or approximating the roots of a polynomial—the values that make the polynomial equal to zero.[1] Practical tools for performing these include graph paper, graphical calculators and computer graphics.[2]

The fundamental theorem of algebra states that an nth-degree polynomial with complex coefficients (including real coefficients) has n complex roots (not necessarily real even if the coefficients are real), although its roots may not all be different from each other. If the polynomial has real coefficients, its roots are either real, or else occur as complex conjugates.

Suppose a polynomial P(x) is graphed as y = P(x). At a real root, the graph of the polynomial crosses the x-axis. Thus, the real roots of a polynomial can be demonstrated graphically.[3] For some kinds of polynomials, all the roots, including the complex roots, can be found graphically. Polynomial equations up to the fifth degree may be solved graphically.[4][5][6]


The geometrical methods of ruler and compass may be used to solve any linear or quadratic equation. Descartes showed that the constructions of Euclid were equivalent to the algebraic solution of quadratics.[7] Cubic equations may be solved by solid geometry. Archimedes' work On the Sphere and the Cylinder provided solutions of some cubics and Omar Khayyam systematised this to provide geometrical solutions of all quadratics and cubics.[8]

25.1 Complex roots of quadratic polynomials

For polynomials with real coefficients, a local minimum point above the x-axis or a local maximum point below the x-axis indicates the existence of two non-real complex roots, which are each other’s complex conjugates.[9] The converse, however, is not true; for example, the cubic polynomial x³ + x has two complex roots, but its graph has no local minima or maxima. The simplest such case involves parabolas.

A second-degree polynomial function whose x intercepts are not real: y = (x − 5)² + 9. The “3” is the imaginary part of the x-intercept. The real part is the x-coordinate of the vertex. Thus the roots are 5 ± 3i.

If a parabola has a global minimum point above the x-axis, or a global maximum point below the x-axis, then its x intercepts are not real. For

y = a(x − h)² + k, if a and k are positive, then the roots are non-real complex numbers. The number k is then the height of the vertex above the x-axis. In the example in the illustration, we have k = 9. Suppose one goes k units in the opposite direction from the vertex, i.e. away from the x-axis. The horizontal distance from that point to the curve is the absolute value of the imaginary part of the root (in the example, that distance is 3).[10] The x-coordinate of the vertex is the real part.[10] Thus, in the example, the roots are[10]

5 ± 3i.

This method is specific to quadratics and does not generalise to higher-degree polynomial equations.
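The geometric reading can be checked against the quadratic formula; expanding y = (x − 5)² + 9 gives x² − 10x + 34, and a short sketch with the standard library's cmath confirms the roots:

    import cmath

    a, b, c = 1, -10, 34                     # x**2 - 10*x + 34, i.e. (x - 5)**2 + 9
    d = cmath.sqrt(b * b - 4 * a * c)        # 6j
    print((-b + d) / (2 * a), (-b - d) / (2 * a))    # (5+3j) (5-3j)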

25.2 References

[1] James A. Ward (April 1937), “Graphical Representation of Complex Roots”, National Mathematics Magazine 11 (7): 297–303, doi:10.2307/3028785

[2] Robert Kowalski, Helen Skala (1990), “Determining Roots of Complex Functions with Computer Graphics”, Coll. Micro. VIII (1): 51–54

[3] Sudhir Kumar Goel, Denise T. Reid. (December 2001), “A graphical approach to understanding the fundamental theorem of algebra”, Mathematics Teacher 94 (9): 749

[4] Henriquez, Garcia, “The graphical interpretation of the complex roots of cubic equations,” American Mathematical Monthly 42(6), June–July 1935, 383-384.

[5] George A. Yanosik (January 1943), “Graphical Solutions for Complex Roots of Quadratics, Cubics and Quartics”, National Mathematics Magazine 17 (4): 147–150, doi:10.2307/3028335

[6] T.W.Chaundy (1934), “On the number of real roots of a quintic equation”, The Quarterly Journal of Mathematics (Oxford University Press) os–5: 10–22, doi:10.1093/qmath/os-5.1.10

[7] Robin Hartshorne (2000), Geometry: Euclid and beyond, pp. 120 et seq., ISBN 978-0-387-98650-0

[8] John N. Crossley (1987), “ch. III Latency”, The emergence of number, ISBN 978-9971-5-0414-4

[9] Stedall, Jacqueline A. (2011), From Cardano’s Great Art to Lagrange’s Reflections: Filling a Gap in the History of Algebra, Heritage of European mathematics, European Mathematical Society, p. 97, ISBN 9783037190920.

[10] Alec Norton, Benjamin Lotto (June 1984), “Complex Roots Made Visible”, The College Mathematics Journal 15 (3): 248–249, doi:10.2307/2686333

25.3 External links

• Complex roots made visible at Mudd Math Fun Facts

• Connecting complex roots to a parabola’s graph at ON-Math

• Complex Roots at Dr. Math. With comments by John Conway

Chapter 26

Identity (mathematics)

Not to be confused with Identity element or Identity function. In mathematics an identity is an equality relation A = B, such that A and B contain some variables and A and B

Visual proof of the Pythagorean identity. For any angle θ, the point (cos(θ), sin(θ)) lies on the unit circle, which satisfies the equation x² + y² = 1. Thus, cos²(θ) + sin²(θ) = 1.

produce the same value as each other regardless of what values (usually numbers) are substituted for the variables. In other words, A = B is an identity if A and B define the same functions. This means that an identity is an equality


between functions that are differently defined. For example (a + b)² = a² + 2ab + b² and cos²(x) + sin²(x) = 1 are identities. Identities are sometimes indicated by the triple bar symbol ≡ instead of =, the equals sign.[1]

26.1 Common identities

26.1.1 Trigonometric identities

Main article: List of trigonometric identities

Geometrically, these are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered in this article.

These identities are useful whenever expressions involving trigonometric functions need to be simplified. An important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. One example is

sin²θ + cos²θ ≡ 1,

which is true for all complex values of θ (since the complex numbers C are the domain of sin and cos), as opposed to

cos θ = 1, which is true only for some values of θ, not all. For example, the latter equation is true when θ = 0, false when θ = 2π.

26.1.2 Exponential identities

Main article: Exponentiation

The following identities hold for all integer exponents, provided that the base is non-zero:

bᵐ⁺ⁿ = bᵐ · bⁿ

(bᵐ)ⁿ = bᵐⁿ

(b · c)ⁿ = bⁿ · cⁿ

Exponentiation is not commutative. This contrasts with addition and multiplication, which are. For example, 2 + 3 = 3 + 2 = 5 and 2 · 3 = 3 · 2 = 6, but 2³ = 8, whereas 3² = 9.

Exponentiation is not associative either. Addition and multiplication are. For example, (2 + 3) + 4 = 2 + (3 + 4) = 9 and (2 · 3) · 4 = 2 · (3 · 4) = 24, but 2³ to the 4 is 8⁴ or 4,096, whereas 2 to the 3⁴ is 2⁸¹ or 2,417,851,639,229,258,349,412,352. Without parentheses to modify the order of calculation, by convention the order is top-down, not bottom-up:

b^p^q = b^(p^q) ≠ (b^p)^q = b^(p·q) = b^(pq).
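Many programming languages adopt the same top-down (right-associative) convention for a chain of exponentiations; for example, Python's ** operator:

    print(2 ** 3 ** 4)                    # 2417851639229258349412352, i.e. 2**(3**4)
    print((2 ** 3) ** 4)                  # 4096
    print(2 ** 3 ** 4 == 2 ** (3 ** 4))   # True: ** groups from the top down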

26.1.3 Logarithmic identities

Main article: Logarithmic identities

Several important formulas, sometimes called logarithmic identities or log laws, relate logarithms to one another.[2] 26.2. SEE ALSO 155

Product, quotient, power and root

The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the p-th power of a number is p times the logarithm of the number itself; the logarithm of a p-th root is the logarithm of the number divided by p. The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions x = b^(logb(x)) and/or y = b^(logb(y)) in the left hand sides.

Change of base

The logarithm logb(x) can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula:

logb(x) = logk(x) / logk(b).

Typical scientific calculators calculate the logarithms to bases 10 and e.[3] Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula:

logb(x) = log10(x) / log10(b) = loge(x) / loge(b).

Given a number x and its logarithm logb(x) to an unknown base b, the base is given by:

b = x^(1/logb(x)).
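In practice, the change-of-base formula is how logarithms to an arbitrary base are usually computed; a minimal sketch with Python's math module:

    import math

    x, b = 100.0, 2.0
    print(math.log(x, b))                  # log base 2 of 100, about 6.6439
    print(math.log(x) / math.log(b))       # the same value via natural logarithms
    print(math.log10(x) / math.log10(b))   # and via base-10 logarithms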

26.1.4 Hyperbolic function identities

Main article: Hyperbolic function

The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn’s rule[4] states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of 2, 6, 10, 14, ... sinhs.[5] The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.

26.2 See also

• Accounting identity

• List of mathematical identities

26.3 References

[1] Weiner, Joan (2004).Frege Explained. Open Court.

[2] All statements in this section can be found in Shailesh Shirali 2002, section 4, (Douglas Downing 2003, p. 275), or Kate & Bhapkar 2009, p. 1-1, for example.

[3] Bernstein, Stephen; Bernstein, Ruth (1999), Schaum’s outline of theory and problems of elements of statistics. I, Descriptive statistics and probability, Schaum’s outline series, New York: McGraw-Hill, ISBN 978-0-07-005023-5, p. 21

[4] G. Osborn, Mnemonic for hyperbolic formulae, The Mathematical Gazette, p. 189, volume 2, issue 34, July 1902

[5] Peterson, John Charles (2003). Technical mathematics with calculus (3rd ed.). Cengage Learning. p. 1155. ISBN 0-7668-6189-9. Chapter 26, page 1155.

26.4 External links

• A Collection of Algebraic Identities

Chapter 27

Inequality (mathematics)

Not to be confused with Inequation. “Less than”, “Greater than”, and “More than” redirect here. For the use of the "<" and ">" signs as punctuation, see Bracket. For the UK insurance brand “More Th>n”, see RSA Insurance Group.

In mathematics, an inequality is a relation that holds between two values when they are different (see also: equality).

• The notation a ≠ b means that a is not equal to b.

It does not say that one is greater than the other, or even that they can be compared in size. If the values in question are elements of an ordered set, such as the integers or the real numbers, they can be compared in size.

• The notation a < b means that a is less than b.

• The notation a > b means that a is greater than b.

In either case, a is not equal to b. These relations are known as strict inequalities. The notation a < b may also be read as "a is strictly less than b". In contrast to strict inequalities, there are two types of inequality relations that are not strict:

• The notation a ≤ b means that a is less than or equal to b (or, equivalently, not greater than b, or at most b).

• The notation a ≥ b means that a is greater than or equal to b (or, equivalently, not less than b, or at least b).

An additional use of the notation is to show that one quantity is much greater than another, normally by several orders of magnitude.

• The notation a ≪ b means that a is much less than b. (In measure theory, however, this notation is used for absolute continuity, an unrelated concept.)

• The notation a ≫ b means that a is much greater than b.

27.1 Properties

Inequalities are governed by the following properties. All of these properties also hold if all of the non-strict inequalities (≤ and ≥) are replaced by their corresponding strict inequalities (< and >) and (in the case of applying a function) monotonic functions are limited to strictly monotonic functions.


The feasible regions of linear programming are defined by a set of inequalities.

27.1.1 Transitivity

The transitive property of inequality states:

• For any real numbers a, b, c:

• If a ≥ b and b ≥ c, then a ≥ c.

• If a ≤ b and b ≤ c, then a ≤ c.

• If either of the premises is a strict inequality, then the conclusion is a strict inequality. E.g. if a ≥ b and b > c, then a > c.

• An equality is of course a special case of a non-strict inequality. E.g. if a = b and b > c, then a > c.

27.1.2 Converse

The relations ≤ and ≥ are each other’s converse: 27.1. PROPERTIES 159

• For any real numbers a and b:

• If a ≤ b, then b ≥ a.

• If a ≥ b, then b ≤ a.

27.1.3 Addition and subtraction

If x < y, then x + a < y + a.

A common constant c may be added to or subtracted from both sides of an inequality:

• For any real numbers a, b, c:

• If a ≤ b, then a + c ≤ b + c and a − c ≤ b − c.

• If a ≥ b, then a + c ≥ b + c and a − c ≥ b − c.

i.e., the real numbers are an ordered group under addition.

27.1.4 Multiplication and division

If x < y and a > 0, then ax < ay.

The properties that deal with multiplication and division state: 160 CHAPTER 27. INEQUALITY (MATHEMATICS)

If x < y and a < 0, then ax > ay.

• For any real numbers, a, b and non-zero c:

• If c is positive, then multiplying or dividing by c does not change the inequality:

• If a ≥ b and c > 0, then ac ≥ bc and a/c ≥ b/c.

• If a ≤ b and c > 0, then ac ≤ bc and a/c ≤ b/c.

• If c is negative, then multiplying or dividing by c inverts the inequality:

• If a ≥ b and c < 0, then ac ≤ bc and a/c ≤ b/c.

• If a ≤ b and c < 0, then ac ≥ bc and a/c ≥ b/c.

More generally, this applies for an ordered field, see below.

27.1.5 Additive inverse

The properties for the additive inverse state:

• For any real numbers a and b, negation inverts the inequality:

• If a ≤ b, then −a ≥ −b.

• If a ≥ b, then −a ≤ −b.

27.1.6 Multiplicative inverse

The properties for the multiplicative inverse state:

• For any non-zero real numbers a and b that are both positive or both negative:

• If a ≤ b, then 1/a ≥ 1/b.

• If a ≥ b, then 1/a ≤ 1/b.

• If one of a and b is positive and the other is negative, then:

• If a < b, then 1/a < 1/b.

• If a > b, then 1/a > 1/b.

These can also be written in chained notation as: 27.1. PROPERTIES 161

• For any non-zero real numbers a and b:

• If 0 < a ≤ b, then 1/a ≥ 1/b > 0.

• If a ≤ b < 0, then 0 > 1/a ≥ 1/b.

• If a < 0 < b, then 1/a < 0 < 1/b.

• If 0 > a ≥ b, then 1/a ≤ 1/b < 0.

• If a ≥ b > 0, then 0 < 1/a ≤ 1/b.

• If a > 0 > b, then 1/a > 0 > 1/b.

27.1.7 Applying a function to both sides

The graph of y = ln x

Any monotonically increasing function may be applied to both sides of an inequality (provided they are in the domain of that function) and it will still hold. Applying a monotonically decreasing function to both sides of an inequality means the opposite inequality now holds. The rules for the additive inverse, and the multiplicative inverse for positive numbers, are both examples of applying a monotonically decreasing function. If the inequality is strict (a < b, a > b) and the function is strictly monotonic, then the inequality remains strict. If only one of these conditions is strict, then the resultant inequality is non-strict. The rules for additive and multiplicative inverses are both examples of applying a strictly monotonically decreasing function. 162 CHAPTER 27. INEQUALITY (MATHEMATICS)

As an example, consider the application of the natural logarithm to both sides of an inequality when a and b are positive real numbers:

a ≤ b ⇔ ln(a) ≤ ln(b). a < b ⇔ ln(a) < ln(b).

This is true because the natural logarithm is a strictly increasing function.

27.2 Ordered fields

If (F, +, ×) is a field and ≤ is a total order on F, then (F, +, ×, ≤) is called an ordered field if and only if:

• a ≤ b implies a + c ≤ b + c; • 0 ≤ a and 0 ≤ b implies 0 ≤ a × b.

Note that both (Q, +, ×, ≤) and (R, +, ×, ≤) are ordered fields, but ≤ cannot be defined in order to make (C, +, ×, ≤) an ordered field, because −1 is the square of i and would therefore be positive. The non-strict inequalities ≤ and ≥ on real numbers are total orders. The strict inequalities < and > on real numbers are strict total orders.

27.3 Chained notation

The notation a < b < c stands for "a < b and b < c", from which, by the transitivity property above, it also follows that a < c. Obviously, by the above laws, one can add/subtract the same number to all three terms, or multiply/divide all three terms by the same nonzero number and reverse all inequalities according to sign. Hence, for example, a < b + e < c is equivalent to a − e < b < c − e.

This notation can be generalized to any number of terms: for instance, a1 ≤ a2 ≤ ... ≤ an means that ai ≤ ai₊₁ for i = 1, 2, ..., n − 1. By transitivity, this condition is equivalent to ai ≤ aj for any 1 ≤ i ≤ j ≤ n. When solving inequalities using chained notation, it is possible and sometimes necessary to evaluate the terms inde- pendently. For instance to solve the inequality 4x < 2x + 1 ≤ 3x + 2, it is not possible to isolate x in any one part of the inequality through addition or subtraction. Instead, the inequalities must be solved independently, yielding x < 1/2 and x ≥ −1 respectively, which can be combined into the final solution −1 ≤ x < 1/2. Occasionally, chained notation is used with inequalities in different directions, in which case the meaning is the logical conjunction of the inequalities between adjacent terms. For instance, a < b = c ≤ d means that a < b, b = c, and c ≤ d. This notation exists in a few programming languages such as Python.
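Computer algebra systems can carry out this part-by-part solution automatically; a sketch assuming SymPy's reduce_inequalities helper:

    from sympy import symbols
    from sympy.solvers.inequalities import reduce_inequalities

    x = symbols('x', real=True)
    solution = reduce_inequalities([4*x < 2*x + 1, 2*x + 1 <= 3*x + 2], [x])
    print(solution)    # a conjunction equivalent to -1 <= x < 1/2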

27.4 Inequalities between means

See also: Inequality of arithmetic and geometric means

There are many inequalities between means. For example, for any positive numbers a1, a2, …, an we have H ≤ G ≤ A ≤ Q, where H, G, A and Q are the harmonic, geometric, arithmetic and quadratic means:

H = n / (1/a1 + 1/a2 + … + 1/an)

G = (a1 · a2 · … · an)^(1/n)

A = (a1 + a2 + … + an) / n

Q = √((a1² + a2² + … + an²) / n)

27.5 Power inequalities

A"power inequality" is an inequality containing ab terms, where a and b are real positive numbers or variable expressions. They often appear in mathematical olympiads exercises. 27.5. POWER INEQUALITIES 163

27.5.1 Examples

• For any real x,

e^x ≥ 1 + x.

• If x > 0, then

x^x ≥ (1/e)^(1/e).

• If x ≥ 1, then

x^(x^x) ≥ x.

• If x, y, z > 0, then

(x + y)^z + (x + z)^y + (y + z)^x > 2.

• For any real distinct numbers a and b,

(e^b − e^a)/(b − a) > e^((a+b)/2).

• If x, y > 0 and 0 < p < 1, then

(x + y)^p < x^p + y^p.

• If x, y, z > 0, then

x^x · y^y · z^z ≥ (xyz)^((x+y+z)/3).

• If a, b > 0, then

a^a + b^b ≥ a^b + b^a.

This inequality was solved by I. Ilani in JSTOR, AMM, Vol. 97, No. 1, 1990.

• If a, b > 0, then

a^(ea) + b^(eb) ≥ a^(eb) + b^(ea).

This inequality was solved by S. Manyama in AJMAA, Vol. 7, Issue 2, No. 1, 2010 and by V. Cirtoaje in JNSA, Vol. 4, Issue 2, 130-137, 2011.

• If a, b, c > 0, then

a^(2a) + b^(2b) + c^(2c) ≥ a^(2b) + b^(2c) + c^(2a).

• If a, b > 0, then

a^b + b^a > 1.

This result was generalized by R. Ozols in 2002 who proved that if a1, ..., an > 0, then

a1^(a2) + a2^(a3) + ··· + an^(a1) > 1

(result is published in Latvian popular-scientific quarterly The Starry Sky, see references).

27.6 Well-known inequalities

See also: List of inequalities

Mathematicians often use inequalities to bound quantities for which exact formulas cannot be computed easily. Some inequalities are used so often that they have names:

• Azuma’s inequality

• Bernoulli’s inequality

• Boole’s inequality

• Cauchy–Schwarz inequality

• Chebyshev’s inequality

• Chernoff’s inequality

• Cramér–Rao inequality

• Hoeffding’s inequality

• Hölder’s inequality

• Inequality of arithmetic and geometric means

• Jensen’s inequality

• Kolmogorov’s inequality

• Markov’s inequality

• Minkowski inequality

• Nesbitt’s inequality

• Pedoe’s inequality

• Poincaré inequality

• Samuelson’s inequality

• Triangle inequality

27.7 Complex numbers and inequalities

The set of complex numbers C with its operations of addition and multiplication is a field, but it is impossible to define any relation ≤ so that (C, +, ×, ≤) becomes an ordered field. To make (C, +, ×, ≤) an ordered field, it would have to satisfy the following two properties:

• if a ≤ b then a + c ≤ b + c

• if 0 ≤ a and 0 ≤ b then 0 ≤ a b

Because ≤ is a total order, for any number a, either 0 ≤ a or a ≤ 0 (in which case the first property above implies that 0 ≤ −a). In either case 0 ≤ a²; this means that i² > 0 and 1² > 0; so −1 > 0 and 1 > 0, which means (−1 + 1) > 0; contradiction.

However, an operation ≤ can be defined so as to satisfy only the first property (namely, “if a ≤ b then a + c ≤ b + c”). Sometimes the lexicographical order definition is used:

• a ≤ b if Re(a) < Re(b) or ( Re(a) = Re(b) and Im(a) ≤ Im(b) )

It can easily be proven that for this definition a ≤ b implies a + c ≤ b + c.

27.8 Vector inequalities

Inequality relationships similar to those defined above can also be defined for column vectors. If we let the vectors x, y ∈ ℝⁿ (meaning that x = (x1, x2, . . . , xn)ᵀ and y = (y1, y2, . . . , yn)ᵀ, where xi and yi are real numbers for i = 1, . . . , n), we can define the following relationships.

• x = y if xi = yi for i = 1, . . . , n

• x < y if xi < yi for i = 1, . . . , n

• x ≤ y if xi ≤ yi for i = 1, . . . , n and x ≠ y

• x ≦ y if xi ≤ yi for i = 1, . . . , n

Similarly, we can define relationships for x > y, x ≥ y, and x ≧ y. We note that this notation is consistent with that used by Matthias Ehrgott in Multicriteria Optimization (see References).

The property of Trichotomy (as stated above) is not valid for vector relationships. For example, when x = (2, 5)ᵀ and y = (3, 4)ᵀ, there exists no valid inequality relationship between these two vectors. Also, a multiplicative inverse would need to be defined on a vector before this property could be considered. However, for the rest of the aforementioned properties, a parallel property for vector inequalities exists.
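Componentwise comparisons of this kind are what NumPy's comparison operators combined with all() compute; a small sketch using the vectors from the trichotomy example:

    import numpy as np

    x = np.array([2, 5])
    y = np.array([3, 4])
    print(np.all(x < y))                   # False: the second components violate 5 < 4
    print(np.all(y < x))                   # also False -- trichotomy fails for vectors
    print(np.all(x <= np.array([2, 6])))   # True: x is componentwise <= (2, 6)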

27.9 General Existence Theorems

For a general system of polynomial inequalities, one can find a condition for a solution to exist. Firstly, any system of polynomial inequalities can be reduced to a system of quadratic inequalities by increasing the number of variables and equations (for example by setting a square of a variable equal to a new variable). A single quadratic polynomial inequality in n−1 variables can be written as:

XᵀAX ≥ 0

where X is a vector of the variables X = (x, y, z, ..., 1)ᵀ and A is a matrix. This has a solution, for example, when there is at least one positive element on the main diagonal of A.

Systems of inequalities can be written in terms of matrices A, B, C, etc. and the conditions for existence of solutions can be written as complicated expressions in terms of these matrices. The solution for two polynomial inequalities in two variables tells us whether two conic section regions overlap or are inside each other. The general solution is not known but such a solution could be theoretically used to solve such unsolved problems as the kissing number problem. However, the conditions would be so complicated as to require a great deal of computing time or clever algorithms.

27.10 See also

• Binary relation

• Bracket (mathematics), for the use of similar ‹ and › signs as brackets

• Fourier-Motzkin elimination

• Inclusion (set theory)

• Inequation

• Interval (mathematics)

• List of inequalities

• List of triangle inequalities

• Partially ordered set

• Relational operators, used in programming languages to denote inequality

27.11 Notes

27.12 References

• Hardy, G., Littlewood J.E., Pólya, G. (1999). Inequalities. Cambridge Mathematical Library, Cambridge University Press. ISBN 0-521-05206-8.

• Beckenbach, E.F., Bellman, R. (1975). An Introduction to Inequalities. Random House Inc. ISBN 0-394- 01559-2.

• Drachman, Byron C., Cloud, Michael J. (1998). Inequalities: With Applications to Engineering. Springer- Verlag. ISBN 0-387-98404-6.

• Murray S. Klamkin. “‘Quickie’ inequalities” (PDF). Math Strategies.

• Arthur Lohwater (1982). “Introduction to Inequalities”. Online e-book in PDF format.

• Harold Shapiro (2005, 1972–1985). “Mathematical Problem Solving”. The Old Problem Seminar. Kungliga Tekniska högskolan.

• “3rd USAMO”. Archived from the original on 2008-02-03.

• Pachpatte, B.G. (2005). Mathematical Inequalities. North-Holland Mathematical Library 67 (first ed.). Ams- terdam, The Netherlands: Elsevier. ISBN 0-444-51795-2. ISSN 0924-6509. MR 2147066. Zbl 1091.26008.

• Ehrgott, Matthias (2005). Multicriteria Optimization. Springer-Berlin. ISBN 3-540-21398-8.

• Steele, J. Michael (2004). The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities. Cambridge University Press. ISBN 978-0-521-54677-5.

27.13 External links

• Hazewinkel, Michiel, ed. (2001), “Inequality”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Graph of Inequalities by Ed Pegg, Jr., Wolfram Demonstrations Project.

• AoPS Wiki entry about Inequalities

Chapter 28

Inequation

In mathematics, an inequation is a statement that an inequality holds between two values.[1] It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between them indicating the specific inequality relation. Some examples of inequations are:

a < b,

x + y + z ≤ 1,

n > 1,

x ≠ 0.

Some authors apply the term only to inequations in which the inequality relation is, specifically, not-equal-to (≠).[2]

28.1 Chains of inequations

A shorthand notation is used for the conjunction of several inequations involving common expressions, by chaining them together. For example, the chain

0 ≤ a < b ≤ 1

is shorthand for

0 ≤ a and a < b and b ≤ 1.

28.2 Solving inequations

Similar to equation solving, inequation solving means finding what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an inequation or a conjunction of several inequations. These expressions contain one or more unknowns, which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more generally, mathematical expressions. A solution of an inequation is an assignment of expressions to the unknowns that satisfies the inequation(s); in other words, expressions such that, when they are substituted for the unknowns, the inequations become true propositions. Often, an additional objective expression is given that is to be minimized by an optimal solution.

For example, 0 ≤ x1 ≤ 690 − 1.5 · x2 ∧ 0 ≤ x2 ≤ 530 − x1 ∧ x1 ≤ 640 − 0.75 · x2 is a conjunction of inequations, partly written as chains; the set of its solutions is shown in blue in the picture (the red, green, and orange

168 28.3. SPECIAL 169

Solution set for example inequations

line corresponding to the 1st, 2nd, and 3rd conjunct, respectively). See Linear programming#Example for a larger example. Computer support in solving inequations is described in constraint programming; in particular, the simplex algorithm finds optimal solutions of linear inequations. The programming language Prolog III supports solving algorithms for particular classes of inequalities (and other relations) as a basic language feature, see constraint logic programming.
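Before reaching for a solver, a conjunction of inequations can simply be tested pointwise; the sketch below (pure Python, sample points chosen arbitrarily) evaluates the example conjunction:

    def feasible(x1, x2):
        return (0 <= x1 <= 690 - 1.5 * x2
                and 0 <= x2 <= 530 - x1
                and x1 <= 640 - 0.75 * x2)

    print(feasible(0, 0))        # True: the origin satisfies all three conjuncts
    print(feasible(300, 100))    # True
    print(feasible(700, 0))      # False: 700 exceeds the bound 690 - 1.5*0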

28.3 Special

√ f(x) < g(x) f(x) ≥ 0 ⇔ g(x) > 0  f(x) < [g(x)]2 170 CHAPTER 28. INEQUATION

28.4 See also

• Equation

• Equals sign

• Inequality (mathematics)

• Relational operator

28.5 References

[1] Thomas H. Sidebotham (2002). The A to Z of Mathematics: A Basic Guide. John Wiley and Sons. p. 252. ISBN 0-471- 15045-2.

[2] Weisstein, Eric W., “Inequation”, MathWorld. Chapter 29

Proofs involving the addition of natural numbers

Mathematical proofs for addition of the natural numbers: additive identity, commutativity, and associativity. These proofs are used in the article Addition of natural numbers.

29.1 Definitions

This article will use the Peano axioms for the definitions of addition of the natural numbers, and the successor function S(a). In particular:

[A1] a + 0 = a

[A2] a + S(b) = S(a + b)

For the proof of commutativity, it is useful to define another natural number closely related to the successor function, namely “1”. We define 1 to be the successor of 0, in other words,

1 = S(0).

Note that for all natural numbers a,

a + 1 = a + S(0) = S(a + 0) = S(a),

that is, S(a) = a + 1.
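The two defining equations translate directly into a recursive addition on successor-counts; the sketch below (plain Python, natural numbers modelled by ordinary non-negative ints) mirrors [A1] and [A2]:

    def S(a):
        return a + 1                 # the successor function

    def add(a, b):
        if b == 0:                   # [A1]: a + 0 = a
            return a
        return S(add(a, b - 1))      # [A2]: a + S(c) = S(a + c)

    print(add(2, 3), add(3, 2))      # 5 5 -- consistent with the commutativity proved below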

29.2 Proof of associativity

We prove associativity by first fixing natural numbers a and b and applying induction on the natural number c. For the base case c = 0,

(a+b)+0 = a+b = a+(b+0)

Each equation follows by definition [A1]; the first with a + b, the second with b. Now, for the induction. We assume the induction hypothesis, namely we assume that for some natural number c,

(a+b)+c = a+(b+c)

Then it follows that

(a + b) + S(c) = S((a + b) + c) = S(a + (b + c)) = a + S(b + c) = a + (b + S(c)),

using definition [A2] for the first, third and fourth equalities and the induction hypothesis for the second. In other words, the induction hypothesis holds for S(c). Therefore, the induction on c is complete.

29.3 Proof of identity element

Definition [A1] states directly that 0 is a right identity. We prove that 0 is a left identity by induction on the natural number a.


For the base case a = 0, 0 + 0 = 0 by definition [A1]. Now we assume the induction hypothesis, that 0 + a = a. Then

0 + S(a) = S(0 + a) = S(a),

using definition [A2] and then the induction hypothesis. This completes the induction on a.

29.4 Proof of commutativity

We prove commutativity (a + b = b + a) by applying induction on the natural number b. First we prove the base cases b = 0 and b = S(0) = 1 (i.e. we prove that 0 and 1 commute with everything).

The base case b = 0 follows immediately from the identity element property (0 is an additive identity), which has been proved above: a + 0 = a = 0 + a.

Next we will prove the base case b = 1, that 1 commutes with everything, i.e. for all natural numbers a, we have a + 1 = 1 + a. We will prove this by induction on a (an induction proof within an induction proof). Clearly, for a = 0, we have 0 + 1 = 0 + S(0) = S(0 + 0) = S(0) = 1 = 1 + 0. Now, suppose a + 1 = 1 + a. Then

S(a) + 1 = S(a) + S(0) = S(S(a) + 0) = S(S(a)) = S(a + 1) = S(1 + a) = 1 + S(a),

using the definition of 1, [A2], [A1], the identity S(a) = a + 1, the induction hypothesis, and [A2] again. This completes the induction on a, and so we have proved the base case b = 1.

Now, suppose that for all natural numbers a, we have a + b = b + a. We must show that for all natural numbers a, we have a + S(b) = S(b) + a. We have

a + S(b) = S(a + b) = S(b + a) = b + S(a) = b + (a + 1) = b + (1 + a) = (b + 1) + a = S(b) + a,

using [A2], the induction hypothesis on b, [A2] again, the identity S(a) = a + 1, the base case b = 1, associativity (proved above), and the identity S(b) = b + 1. This completes the induction on b.

29.5 See also

• Binary operation

• Proof

• Ring

29.6 References

• Edmund Landau, Foundations of Analysis, Chelsea Pub Co. ISBN 0-8218-2693-X.
