
The 0.999…=1 fallacy.

This fallacy has its origins in Euler’s wrong ideas. It comes solely from Euler’s definition of an infinite sum, which is both illogical and spurious. Ignorant academics have unfortunately subscribed to Euler’s wrong ideas and a sewer load of theory has arisen therefrom. But what can one expect from a man who believed that 1/∞ = 0?

After the advent of Georg Cantor’s poisonous ideas on set theory and non-existent real numbers, things grew progressively worse.

A proof for simpletons:

Consider that if 1/3 ≠ 0.333…, then 0.999… ≠ 1.

The support for the 1/3 = 0.333… equality comes from ignorance. However, for this short proof I will assume that it is correct. After I debunk the equality, I will proceed to show that non-terminating decimals are a fallacy based on incorrect logic (more like the lack of it).

Proof:

We know that:

3/10 + 3/10^2 + … + 3/10^n = f(n) = (1/3)(1 − 1/10^n)
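The finite identity above can be checked exactly. Here is a quick sketch in Python using exact Fraction arithmetic (the function names f and closed_form are mine, for illustration only):

```python
from fractions import Fraction

def f(n):
    """The finite sum 3/10 + 3/10^2 + ... + 3/10^n, computed exactly."""
    return sum(Fraction(3, 10**k) for k in range(1, n + 1))

def closed_form(n):
    """The closed form stated above: (1/3) * (1 - 1/10^n)."""
    return Fraction(1, 3) * (1 - Fraction(1, 10**n))

for n in range(1, 20):
    assert f(n) == closed_form(n)

print(f(5))                   # 33333/100000, i.e. 0.33333
print(Fraction(1, 3) - f(5))  # 1/300000 - every finite sum falls short of 1/3
```

Note that for every finite n the difference 1/3 − f(n) is exactly 1/(3 × 10^n).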

Now followers of Euler (academic morons) like to think that

3/10 + 3/10^2 + 3/10^3 + … = 1/3

In which case, the direct implication is that f(∞) = 1/3, but as we already know, f(∞) does not exist as a well-defined concept. If indeed f(∞) were possible, then the graph of f(n) = (1/3)(1 − 1/10^n) would have no asymptote. In fact, the concept of limit in this respect would also not exist.

Therefore, the equality 1/3 = 0.333… is ill-formed and false.

Radix representation:

Representing numbers in radix systems is just a way of finding another equivalent fraction in that radix system. For example: to represent 1/4 in base 10, we write 0.25, which means 2/10 + 5/100 = 25/100 = 1/4. It is impossible to find an equivalent fraction for 1/3 in base 10, because 3 is not one of the prime factors of 10. To argue ignorantly that 3/10 + 3/100 + 3/1000 + ⋯ = 1/3 is to claim that this statement regarding prime factors is FALSE. 3/10 + 3/100 + 3/1000 + ⋯ is not an equivalent fraction because the ellipsis is meaningless nonsense. Infinity is an ill-formed concept that has no place in rational thought and is not required in any mathematics.

Debunking the popular notion that 1/3 = 0.333… because of long division.

I begin this proof by stating that the belief that 1/3 = 0.333… has absolutely nothing to do with long division.

I suppose that it would be helpful to see why long division works in the first place. Mathematics professors have no idea why it works, so they should pay careful attention to this proof. The long division algorithm has its origins in Euclid’s Algorithm.

Assume that a and b are two integers greater than 0. Furthermore, let q denote the quotient and r the remainder (if any).

For long division to work, we must have:

a = (q_n × b)10^n + (q_{n−1} × b)10^{n−1} + (q_{n−2} × b)10^{n−2} + ⋯ + (q_1 × b)10 + (q_0 × b) + r

So,

a = b × (q_n 10^n + q_{n−1} 10^{n−1} + … + q_0) + r

a = bq + r where 0 ≤ r < b

This last statement is exactly Euclid’s Algorithm.
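The relationship a = bq + r can be demonstrated in a few lines of Python (divmod returns the quotient and remainder; this small sketch is mine, for illustration only):

```python
def euclid_div(a, b):
    """Return (q, r) with a = b*q + r and 0 <= r < b."""
    q, r = divmod(a, b)
    assert a == b * q + r and 0 <= r < b
    return q, r

print(euclid_div(1, 3))   # (0, 1): 1 = (3 x 0) + 1
print(euclid_div(10, 3))  # (3, 1)
```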

So how do we apply it to 1/3?

1 = (3 × 0) + 1

Now, where in the previous statement do you see anything resembling that process mathematicians use to find 0.333 … , that is, the so-called long division algorithm? Nowhere!

You might be tempted to argue as follows:

a = q × b + r

1 = 0.3 × 3 + 0.1

0.1 = 0.03 × 3 + 0.01

0.01 = 0.003 × 3 + 0.001, where in each line the first factor is the quotient, 3 is the divisor, the final term is the remainder and the left-hand side is the dividend.

Whence, 1 = 0.9 + 0.09 + 0.009 + 0.001.

Observe that b = 3, and b is always greater than r. Furthermore, no stopping condition exists because r is never equal to 0.

The reason for this is that long division has nothing to do with the process ignorant academics refer to in arriving at the fallacy 1/3 = 0.333… The process used to arrive at the ill-formed conclusion that 1/3 = 0.333… is as follows:

1/3 = 1/3 × 10/10 = 10/3 × 1/10

1/3 = (3 + 1/3) × 1/10

1/3 = 3 × 1/10 + 1/3 × 1/10

1/3 = 3 × 1/10 + 1/3 × 1/10 × 10/10

1/3 = 3 × 1/10 + 10/3 × 1/10 × 1/10

1/3 = 3 × 1/10 + 10/3 × 1/10^2

1/3 = 3 × 1/10 + (3 + 1/3) × 1/10^2

1/3 = 3 × 1/10 + 3 × 1/10^2 + 1/3 × 1/10^2

So, we can continue in this fashion or notice that the 3s repeat.

Or we can be smart and simply write: 1/3 = 0.3(n times) + 1/3 × 1/10^n, where n is an integer greater than or equal to 0.

The method just described has ABSOLUTELY NOTHING to do with long division. Yet idiot academics have been claiming this for the last few hundred years. Strange? You decide.
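The general statement 1/3 = 0.3(n times) + 1/3 × 1/10^n holds exactly for every finite n, which is easy to confirm with exact rational arithmetic (a Python sketch of mine, not a standard routine):

```python
from fractions import Fraction

def expansion_identity(n):
    """Check exactly that 1/3 = 0.33...3 (n threes) + (1/3) * (1/10^n)."""
    n_threes = Fraction(int("3" * n), 10**n)   # e.g. n = 3 gives 333/1000
    remainder = Fraction(1, 3) * Fraction(1, 10**n)
    return n_threes + remainder == Fraction(1, 3)

assert all(expansion_identity(n) for n in range(1, 30))
print("identity holds exactly for every finite n tested")
```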

Representing magnitudes and numbers:

Since 1/3 is a well-defined number, one ignorantly supposes that it can be represented in any radix system, in particular in base 10 as 0.333…

This assertion is false because placing an ellipsis (…) behind the last 3 does not make it valid or meaningful. Infinity does not belong in well-defined concepts. In fact, infinity has no place in mathematics.

We can well define 1/3 as follows in base 10:

1/3 = 0.3(n times) + 1/3 × 1/10^n  [Well defined]

Now if one puts any of n = 1, 2, 3, 4, 5, … into [Well defined] and uses the same common false proof, that is,

x0.999... 10 x  9.999... 9 x  9 x  1, then choosing n  7 gives: 1 1  1. x  0.3333333   7  3 10 

2. 3x = 0.9999999 + 1/10,000,000

9999999 1  10,000,000 3. 3x     10,000,000 10,000,000  10,000,000

4. 3x  1

1 5. x  3

1 1 0.9999999  3x  1 Observe from step 2 that 10,000,000 because , 1 x  so 3 .

Now, let's choose n = ∞ (∞ is an ill-defined concept) and repeat the process:

1. x = 0.333… + (1/3 × 1/10^∞)

2. 3x = 0.999… + 1/10^∞

999... 1  999... 1 10 3. 3x         10 10  10 10

4. 3x  1

1 5. x  3

And once again, even by using the ill-defined concept of infinity, we have the same result, that is, 3x = 1 → x = 1/3, which is what we started out with.

Euler incorrectly defined an infinite sum to be:

a arn a a S lim  lim   0  n1r n  1  r 1  r 1  r

Where does Euler go wrong in his definition of infinite sum?

a arn a1 a S lim  lim   lim   0101    n1r n  1  r 1  r n  10 1  r

But we know that the difference is a/(1 − r) − S = 1 − 0.999… = 1/10^∞, that is, 1 − 0.999… ≠ 0, therefore 1 ≠ 0.999…

Step 2 reveals Euler's error. In his definition, he simply discards 1/10^∞, that is, Euler treats this as zero, but it is not zero! This misconception arises from the fact that lim_{n→∞} 1/10^n = 0.
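For the series behind 0.999… (a = 9/10, r = 1/10), the quantity being discarded can be computed exactly for any finite n (a Python sketch of mine, using exact Fraction arithmetic):

```python
from fractions import Fraction

a, r = Fraction(9, 10), Fraction(1, 10)

def S(n):
    """Partial sum (a - a*r^n) / (1 - r) of the geometric series."""
    return (a - a * r**n) / (1 - r)

for n in range(1, 15):
    # The discarded tail a*r^n/(1 - r) equals exactly 1/10^n here,
    # and it is nonzero for every finite n.
    assert a / (1 - r) - S(n) == Fraction(1, 10**n)

print(S(4))         # 9999/10000
print(a / (1 - r))  # 1
```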

However, mathematicians don't really understand limits that well. In fact, none of them knows what a number is.

What the last expression means is: as n increases without bound, the least value (greatest lower bound) that 1/10^n cannot attain is 0.

This is the correct interpretation of lim_{n→∞} 1/10^n = 0.

When one uses well-defined objects, the results are always consistent. Note that step 2 proves 0.999… < 1.

Yet another proof:

Take the limit of both sides of 1/3 = 0.3(n times) + 1/3 × 1/10^n:

lim_{n→∞} 1/3 = lim_{n→∞} 0.3(n times) + lim_{n→∞} (1/3 × 1/10^n)

The previous statement is equivalent to:

1 1 1  lim 0.333...  lim  n  n3 n   3 10  [E]

According to Euler's formula for an infinite sum:

0.333… = lim_{n→∞} (3/10 + 3/10^2 + … + 3/10^n) = lim_{n→∞} [(3/10)(1 − (1/10)^n) / (9/10)]

1   1 1  0.333... lim   lim   n  n3  n   3 10 

Putting this in [E]:

1 1   1 1   1 1  lim lim   lim  n   lim   n  n3 n  3  n   3 10  n   3 10  [CR]

1 1  3 3

It can be seen that discarding the portion of 1/3 that cannot be measured exactly in base 10, that is, lim_{n→∞} (1/3 × 1/10^n), would have prevented us from reaching the correct conclusion of the RHS of [CR].

Simply replacing lim_{n→∞} (1/3 × 1/10^n) with 0 is therefore incorrect; that is, Euler's definition of infinite sum is flawed.

A lesson on fractions.

After several discussions with academics on the forum, centred on the question:

What does it mean to represent rational numbers in radix systems?

I realized that none of them understood what it means for a rational number to be represented in a radix system. So I decided to include this section to clarify some of these details.

The long division algorithm was used by mathematicians to facilitate the conversion of rational numbers of the form a/b into some radix form:

a_n 10^n + a_{n−1} 10^{n−1} + … + a_1 10 + a_0 + b_1/10 + b_2/10^2 + b_3/10^3 + … + b_n/10^n.

It was soon discovered that rational numbers whose denominator was not a factor of the radix could not be represented finitely in that radix system, that is, it was not possible to represent the remainder of a/b, because it cannot be measured by the finite sum only of the products of integers and reciprocal powers of that radix.
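The digit-by-digit conversion of a/b into a radix system, together with the exact remainder left over after n digits, can be sketched as follows (Python; the helper radix_digits is my own illustration):

```python
from fractions import Fraction

def radix_digits(a, b, base, n):
    """First n digits of a/b (with 0 < a < b) in the given base,
    plus the exact remainder still unrepresented after n digits."""
    x = Fraction(a, b)
    digits = []
    for _ in range(n):
        x *= base
        d = int(x)       # next digit
        digits.append(d)
        x -= d           # exact residue carried forward
    return digits, x / base**n

digits, rem = radix_digits(1, 3, 10, 5)
print(digits)  # [3, 3, 3, 3, 3]
print(rem)     # 1/300000 - nonzero for every finite n, since 3 is not a factor of 10
```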

A common misconception is that 1/3 = 0.333… It is easy to convert 1/3 to base 10 if we use the following definition:

1/3 = 0.3(n times) + 1/3 × 1/10^n, where n ∈ ℕ and n ≥ 1. The problem occurs when one tries to have n = ∞:

1 1 1  0.333...     3 3 10 

Thus, 1/3 is equal to 0.333… + 1/3 × 1/10^∞, not just 0.333…

1/3 × 1/10^n is that part of 1/3 that is not measurable in powers of tenths,

that is, 1/3 cannot be expressed as the finite sum only of products of natural numbers and powers of tenths.

So what do you think happens when one converts 1/3 to decimal (base 10)?

Answer: One is measuring 1/3 in powers of tenths.

As another elementary example: suppose you wish to find the representation of 1/3 in base 15. This means you want to measure it in fifteenths, two hundred and twenty-fifths, etc. So, you begin with fifteenths because it is the first column to the right of the base-15 radix point. It is not hard to see that

1/15 + 1/15 + 1/15 + 1/15 + 1/15 = 5/15 = 1/3

On the other hand, it is not possible to represent (measure) 1/4 completely (finitely) in powers of fifteenths:

We notice that the most fifteenths we can have is 3/15 because 4/15 > 1/4.

So,

1/15 + 1/15 + 1/15 = 3/15 = 1/5 = 0.2 (base 10)

But we are still short because 1/5 < 1/4.

Next we want to see how many two hundred and twenty fifths we should add:

1/5 + 11/225 = 56/225 < 1/4

So we are still short. Continuing, we have:

1/5 + 11/225 + 3/3375 = 843/3375 < 1/4

And so on. It is thus evident that 1/4 = 0.3(11)₁₅ + (1/4 × 1/225).

Or in general, 1/4 = 0.3(11)(n times)₁₅ + (1/4 × 1/15^(2n)).
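The base-15 digits above can be generated mechanically with exact arithmetic (a Python sketch; base15_digits is my own helper, assuming 0 < x < 1):

```python
from fractions import Fraction

def base15_digits(x, n):
    """First n base-15 digits of a fraction 0 < x < 1, plus the exact leftover."""
    digits = []
    for _ in range(n):
        x *= 15
        d = int(x)
        digits.append(d)
        x -= d
    return digits, x / 15**n

digits, rem = base15_digits(Fraction(1, 4), 4)
print(digits)  # [3, 11, 3, 11] - the repeating block of 1/4 in base 15
print(rem)     # 1/202500, which is exactly (1/4) * (1/15^4)
assert rem == Fraction(1, 4) * Fraction(1, 15**4)
```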

1/4 × 1/225 = 1/900 is that part of 1/4 that cannot be measured in powers

of fifteenths, that is, there is no method to write 1/900 as a sum:

1/900 = b_1/15 + b_2/15^2 + … + b_n/15^n

where b_1, b_2, …, b_n are natural numbers, that is, b_i ∈ ℕ.

1/900 = 0/15 + 0/15^2 + 3/15^3 + 11/15^4 + 3/15^5 + 11/15^6 + … is not an answer because it is not finite.

One might think that

1 1 1  0.003(11)15   4  900 4 15 

is a representation, but this is not true because 1/4 is not an integer, that is, no b_i can be (or is allowed to) equal to 1/4 because b_i ∈ ℕ.

Continue reading to see how all the fake proofs are debunked.

The flawed Set Theoretical definition of number:

I first want to expose the fraudulent definition of number in just a few pages. It's important because many of you reading this are mathematics professors or educators who may think you know what a number is. But before I expose the nonsense of set theory in this regard, I urge you to read why real numbers don’t exist.

Irrational numbers don't exist:

Proof using mainstream mathematics, that irrational numbers do not exist.

Now back to my discussion on set theory:

Incompetent modern mathematicians define the first few numbers this way:

0={} 1= 0 = { {} }

2= 0,1 = { {} , {{}} } (*) 3= 0,1,2 = { {} , {{}} , { {} , {{}} } }

A. Academics claim that every set has the empty set as a subset, which incidentally cannot be defined in the way non-empty sets are defined. Moreover, none of the 9 so-called axioms of set theory defines set, never mind the empty set. If {} is the empty set, then what sense does the set {{}} make? Answer: Nonsense. The object {{}} contains the empty set as a subset and also as an element. Confused? I am not surprised. Of course this makes perfect sense to academic ignoramuses who think they can just define concepts without careful thought.

B. What does 0={} and 1={0}={{}} tell you about the magnitudes 0 and 1? Answer: Nothing.

What makes {{}} different from {}? The claim that {} is contained in {{}} is false, not to mention completely illogical, for many reasons, but the one that comes to mind instantly is the fact that 0 cannot be contained by any magnitude. It has no size. Your lecturer will tell you that {} is not the same as {{}}, but as you can see, this is absurd, because 1 = {{}} implies that 1 contains two empty sets, since the empty set is by definition a subset of every set.

C. Let's overlook A and B by believing these statements (*) are true. This being the case, we have a faux definition of numbers that tells us nothing about measure, only about containment and a seriously misguided understanding thereof. How do we compare sets with the same types of base object? Do we count the number of empty sets in a given set? But this is absurd because each set has only one empty set as subset by the definition of its founders.

D. We all know 2 is twice as large as 1, but how can we tell from this representation:

1 = { {} }
2 = { {}, {{}} }

Along the same lines, is 1={ {} } therefore twice as large as 0={}? Absurd to say the least. The incompetents who invented set theory only count the same elements once! Wait a moment, did I say count? There is a problem here, because academics are trying to define number and are already using the word count. No matter from what perspective you look at this misguided, moronic rubbish, it never makes any sense. Reason: it is not well-defined.

Using this complex and ill-defined set theory definition for number, there is no sensible way of comparing any two sets that represent numbers. Set theory is an epic failure because it is a failed attempt to define numbers in terms of containment rather than measure.

Try comparing 1 and 2: you would first have to remove all similar "elements", but if you do this, then you end up with:

1={} and 2={ {{}} }

Suddenly 1 looks the same as 0! But what does { {{}} } mean? Wow! It seems that the number 1 must change back into an element when it was first defined as a set! No consistent method of comparison! No consistent definition of the underlying mathematical objects! All you have is sheer confusion. But this is not at all surprising considering that the father of set theory, Georg Cantor, was mentally unstable.

The hilarious Peano axioms were a feeble and juvenile attempt to define the properties of arithmetic on natural numbers, and then to define numbers themselves. Only thing is, the Peano axioms assume the existence of natural numbers, and do not define irrational or real numbers.

Cantor also had some wrong ideas about bijection and cardinality of sets.

The well-formed definition of number.

What is qualitative and quantitative measure? By comparing two magnitudes qualitatively, one can usually tell whether they are equal or not, by visual observation.

Qualitative measurement can be inaccurate. For example, even if a mathematician were able to discern that two magnitudes are not equal, it would still be impossible to determine how much larger or smaller the one is than the other, that is, a difference is not well-defined.

To define difference, the brilliant Ancient Greeks began by formalizing the process of comparison, that is, they formed ratios of magnitudes.

For example:

magnitude (object A) : magnitude (object B) which means literally:

The magnitude of A compared with the magnitude of B qualitatively, that is, one of two outcomes only:

magnitude(A) = magnitude(B)

or

magnitude(A) not equal to magnitude(B).

Note that the magnitudes are unknown till this stage. They are only being compared qualitatively (by visual observation).

An incredible breakthrough occurred when they discovered the abstraction of a unit, from which the natural numbers were born. A unit is defined very simply as the comparison of any magnitude with itself or another equal magnitude, that is,

magnitude (X) : magnitude (X) = UNIT (**)

(**) This great accomplishment led to quantitative measurement.

Now, it was possible to tell the difference between two magnitudes being compared (provided both are measurable in whole units), and a known symbol (having a known magnitude value) could be associated with both magnitude(A) and magnitude(B), that is, both magnitudes are now comparable quantitatively as natural numbers. The difference is simply the number of units by which the larger number exceeds the smaller. If the numbers are the same, the difference is 0.

Note that the magnitudes are known as natural numbers at this stage. They are being compared quantitatively (by finding the difference in terms of a unit).

Recap: We started with an unknown magnitude (X) and arrived at a quantitative measurement of X in terms of natural numbers. A natural number is a ratio X : N where X is a ratio of measurable magnitudes and N is a unit.

Now, this great knowledge was insufficient where incommensurable magnitudes π, √2, e, etc. are concerned. That is, incommensurable magnitudes can be represented only by approximations through use of fractions (or radix systems).

But this did not prevent the Greeks from trying to measure the magnitudes that refused to be ratios.

The Archimedean Property states:

If x is any magnitude, then there exists a well defined natural number N such that x < N.

Note that I use the word magnitude, because a real number is not well-defined. Archimedes knew nothing about real numbers. To him they did not exist. The Archimedean property is misunderstood by most professors of mathematics. The property was not intended to be used as a prelude to the concept of a least upper bound of a set which is bounded above. Its purpose was to establish a method for measuring incommensurable magnitudes by means of approximation. Archimedes’ propositions 3 and 4 of On Spirals were plagiarized by both Dedekind and Riemann in their formulations of Dedekind cuts and upper/lower sums respectively.

For example, Archimedes knew that his method of exhaustion would never result in a value greater than some rational number that is larger than π. The property is stated more accurately as follows:

If x is any magnitude, then there exists a commensurable magnitude N such that x < N.

This last definition uses commensurable magnitude because natural numbers are an abstraction derived from ratios of commensurable magnitudes - a fact that no professor of mathematics knows, unless such a professor learned it from me.

The Archimedean property can be generalised further:

If x is any magnitude, then there exist commensurable magnitudes M and N such that M < x < N.

What does the Archimedean property mean?

The property simply states that given any commensurable or incommensurable magnitude, a commensurable magnitude exists which is greater. The gist is that given any magnitude X , there is a commensurable magnitude N , which is greater than the magnitude X .
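The bracketing version of the property is easy to illustrate (a Python sketch of mine; it uses floor-based integer bounds, so it yields M ≤ x < N rather than strict bounds on both sides):

```python
import math

def bracket(x):
    """Return commensurable (integer) bounds (M, N) with M <= x < N."""
    M = math.floor(x)
    return M, M + 1

print(bracket(math.pi))       # (3, 4)
print(bracket(math.sqrt(2)))  # (1, 2)
```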

Archimedes knew he could start with some fraction less than π and determine its value to whatever accuracy he desired.

Thus, the objects known as real numbers today, are not numbers unless these represent well-defined ratios.

If you think that Cauchy sequences and Dedekind cuts resolve this problem, think again! See my article on magnitude and number.

The flawed Real number concept.

To equate every imaginable magnitude to an ill-defined point on a straight line, demonstrates lack of logic and mathematical aptitude.

One is able to measure magnitudes through the idea of ratio (comparisons). It is theoretically (*) impossible to mark off a point on a straight line. As an example, if x is a point on the real number line, then how would one mark off the predecessor or successor of x? The only way one might denote x itself on the real number line is by means of a previously measured magnitude (a number). Thus, while it is possible to denote measured magnitudes, it is impossible to denote all magnitudes on the real number line, because not all magnitudes are measurable (completely, that is)!

(*) Consider that even marking off √2 on the number line is not a reification of the point, which has no extent or dimension. √2 must be associated to a number. In other words, in order to reify the point √2, one must have measured (not approximately, but completely) √2 using established numbers, that is, the rational numbers.

Although the concept of distance arrived through the introduction of ratios, it was not formalized until the Pythagoreans, who well-defined distance in terms of area (the distance formula or Pythagoras' Theorem: a^2 + b^2 = c^2). The area concept was initially understood in terms of rectangles or squares. Thus, an idea that started out as the shortest path between two points was later formalized in terms of area. The measurement of area was possible because of the properties of parallel lines. To understand this, begin with a square resting between two parallel lines; next, show that the area of a triangle (formed by the square diagonal and its sides) is half that of the square; next, show that the areas of squares are equal to those of parallelograms on the same base; and finally, prove Pythagoras' Theorem.

The previous idea is one of the foundational concepts of all geometry and was used throughout Euclid's Elements.

How does it make sense to use an ill-defined concept in Georg Cantor's idea of countability? Cantor stated that a set is countable if it can be placed into a one-to-one correspondence with the natural numbers, that is, a bijection. While one can count well-defined mathematical objects, such as rational numbers, it is absurd to count ill-defined objects. If real numbers are points on a straight line, then of course they cannot be counted, because points have no dimensions. For example, even if one could find the first point, there would be no way of getting to the previous or next point. So, Cantor's theorem tells us an obvious fact: one can't count ill-defined objects. Ignorant academics have for many decades dwelled on these ill-conceived Cantorian ideas that lead nowhere. There is nothing remarkable about the countability idea. In fact, if a real number could be defined using a given radix system, then one could prove real numbers are countable, because radix systems can be used only to represent rational numbers in a well-defined way, since rational numbers are well-defined! Recurring and non-repeating decimals are not well-defined numbers, which finally brings us to the main topic of this article:

Proof that 0.999... is less than 1.

While it is true to say:

n 9 lim 1 n  k k1 10

It is false to conclude that:

 9 0.999...  1  k k1 10

The last statement suggests an infinite sum is possible, but this is false.

We know that for a geometric series with first term a and common ratio r , the sum to n terms is given by:

a arn S  n 1 r [A]

If |r | 1, then we also know that:

a arn a limSn  lim  n n  1r 1  r

Where modern mathematicians (read as: ignoramuses) are concerned,

SS  lim n n

An academic misconception: arn lim A common misconception is that one can simply discard n 1r

a arn a ar n S lim   lim from n1r 1  r n  1  r [SF]

a S  and define the sum as  1r . This kind of thinking is erroneous.

The RHS of [SF] is ALWAYS LESS THAN a/(1 − r).

a a S  S  Therefore  1r . Rather,  1r as n gets very large.

Note that n ≠ ∞ because ∞ is not a number!

It is ridiculous to claim that anything is "infinitely large" or "infinitely small" because both phrases are meaningless. The first thing that comes to mind here: what is the difference between "infinitely large" and "infinity"? Similarly for "infinitely small" and "zero".
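The claim that every finite partial sum falls short of a/(1 − r) can be checked exactly for any example series (a Python sketch of mine, using 1/2 + 1/4 + 1/8 + … for illustration):

```python
from fractions import Fraction

def S(a, r, n):
    """Partial sum of a geometric series via the closed form [A]."""
    return (a - a * r**n) / (1 - r)

a, r = Fraction(1, 2), Fraction(1, 2)  # the series 1/2 + 1/4 + 1/8 + ...
limit = a / (1 - r)                    # a/(1 - r) = 1

for n in range(1, 40):
    assert S(a, r, n) < limit          # every finite partial sum is strictly smaller

print(limit - S(a, r, 10))             # 1/1024 - the exact shortfall at n = 10
```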

Now back to our proof:

What we want to show is that:

9/10 + 9/10^2 + 9/10^3 + … + 9/10^n < 1 for any given n.  [P]

From [A]:

n 9 9 1     n 9 9 9 910 10 10  1    ...  S    1  2 3 n n 1   10 10 10 101  10  10

So we can rewrite [P] as:

n 1  Sn 1    1 10 

By mathematical induction:

S_1: n = 1: 1 − (1/10)^1 < 1 is true.

k 1  Assume Sk : n=k: 1   1 for some arbitrary k 10 

Is Sk 1 true?

k 1  1   1 Working with the left hand side of 10  and the fact that the next term 9 is 10k1 , we have:

1 − (1/10)^k + 9/10^(k+1) = 1 − 10/10^(k+1) + 9/10^(k+1) = 1 − 1/10^(k+1) = 1 − (1/10)^(k+1)

But

1  1k1   1 10 

Therefore our induction hypothesis is true. However, academics will throw a tantrum at this point and claim that one must examine the limit of 1 − (1/10)^n < 1 as n → ∞.
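The induction step can also be replayed with exact arithmetic for as many k as one likes (a Python sketch of mine):

```python
from fractions import Fraction

def lhs(k):
    """1 - (1/10)^k plus the next term 9/10^(k+1)."""
    return 1 - Fraction(1, 10**k) + Fraction(9, 10**(k + 1))

def rhs(k):
    """1 - (1/10)^(k+1)."""
    return 1 - Fraction(1, 10**(k + 1))

for k in range(1, 25):
    assert lhs(k) == rhs(k) and rhs(k) < 1
print("induction step verified exactly for every k tested")
```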

Well, let's humour them and examine the limit by using the ill-defined concept of limit, and also the ill-defined sum to infinity S_∞; then we have:

1   lim 1n    1 n 10    1  0  1

The previous limit implies that 1 < 1, which is absurd. Hence 0.999… < 1. One of the main problems with the ill-defined limit is that it is sometimes a number and sometimes an incommensurable magnitude. Another problem is that the limit implies

(in the case of 0.999…) that an infinite sum S_∞ is possible, which is of course absolute nonsense. Modern mathematicians, who are known for being misguided, defined the sum of an infinite series to be its limit. This idea is evidently flawed in the 0.999… argument.

In the case of

1   lim 1n    1 n 10  

1 limn  0  one must concede that having n 10 is exactly like treating infinity ( ) as a number.

A very common misconception is that an infinite sum is actually possible. Well, common sense tells us that no infinite process is theoretically or practically possible. For example, Zeno's faux paradox (it is really not a paradox at all; rather, it is an exercise in mathematical illusion by means of flawed and deceptive arguments) led misguided mathematicians to all the wrong inferences.

Zeno's argument is purely theoretical and there is nothing paradoxical about it. The argument involves an infinite process. Reinforced with incorrect inferences that arise from deceptively producing a link to the physical world, it misleads a thinker into concluding that something is not quite right. Of course, if one has to complete a distance x (by walking) and one divides the remaining distance in half each time, theoretically one will never complete the distance. Now back to reality: each time one takes another step, the remaining distance is less than x; hence, when one reaches the halfway mark of the remaining distance, eventually one will be able to take the final step to complete the distance. There is nothing magical or paradoxical about this. To use juvenile American lingo, it's not even cool. Cantorian theory and incorrect ideas about discrete and continuous distances are based on this rot. There is no such thing as a discrete or continuous distance.

The first incorrect ideas about limits stem from the following limit:

a arn a limSn  lim  n n  1r 1  r

Although the expression ar^n becomes very small, it never actually becomes zero in theory. Theoretically, one cannot imagine a magnitude becoming infinitely small (it's almost maddening), and the human brain, by attempting to make sense of the same, performs a logical illusion. For example, mathematicians imagine something like the following happening:

However, what they cannot understand, is that the end part of the tail cannot be reached! This means that no matter how many partial sums we compute in the series:

9 9 9   ... 10 100 1000 , no partial sum will ever reach 1; not even at infinity, because there is no such thing as infinity ! Whoever thought of the previous image as a good way of convincing others that 0.999... is equal to 1, did himself a disservice. Since each of the terms get closer and 9 closer to zero, it follows that the largest term will always be 10 and that no matter how many terms are summed, it will be impossible for the sum to reach 1, that is, no carry is ever possible. Tsk, tsk.

Infinity is an ill-defined figment (*) of the human imagination. Through mathematical induction, it is easy to show that no matter how many partial sums are taken (one can only determine partial sums because these are ... finite!), no sum will ever reach 1. How difficult can this be to comprehend...and yet the human brain will try to compensate for concepts that it cannot fathom.

(*) There has never been a human who has been able to well-define infinity. However, throw an ill-defined concept at mathematicians and what they come up with is a sewer load of worthless theory: countable sets, transfinite numbers, etc. What is even more disconcerting is how these mathematicians are rewarded for their non-remarkable "dissertations" on these concepts. Makes one wonder if there really is that much difference between mathematics and Hollywood.

Some common juvenile arguments in favour of the 0.999... and 1 equality are:

x  0.999... 10x  9.999...

9x  9 x  1

I am going to call the previous FALSE proof the 10-EX proof because the second step multiplies both sides by 10.

In the previous argument it is assumed that arithmetic involving an infinite or ill-defined quantity (0.999…) is permissible. In fact, arithmetic operations apply only to well-defined ratios or finite approximations of given magnitudes.

Now to put the previous fallacious argument to rest once and for all, consider the following:

x r 10x 10 r

9x 9 r x  r

Regardless of what r is, the final result will be exactly x = r. One could say r = cat, and from this it follows that x = cat. In other words, what you feed or input to the algorithm must be exactly the same as the output. This implies, in the case of 0.999…, that the two operations:

10x 10 r Multiplication by 10 9x 9 r Adding equal sides of two equations do not produce the expected result. Reason? 0.999... is not a "number"! It is an ill- defined concept masquerading as a number. The two previous statements are guaranteed to correctly only on rational numbers. So, one can expect strange or incorrect results when applying these statements to other mathematical objects that are ill-defined, for example, 0.999...

By the logic just demonstrated, any input to this arithmetical algorithm, that is,

x input 10x 10  input

9x 9  input x  output must by definition be the same as its output. Therefore,

x  0.999... 10x  10  0.999...

9x  9  0.999... x  0.999...

Now in mathematics, if an algorithm produces results contrary to its definition, it should be looked at very suspiciously. It does not take an imbecile more than 10 seconds to realize that if the algorithm is valid (which it is, provided numbers are being used), then perhaps something is wrong with the input that produces two apparently different results! The algorithm must function as expected. It should by no means change the appearance of the input. Unless of course, there is something fundamentally flawed with the algebra. I think not. My guess is that there is a lot that is flawed with the thinking of modern academics. The answer is that one obtains absurd results when mixing numbers with the concept of limit, because the limit is sometimes a number and sometimes it is an indeterminate ethereal concept. See my article on magnitude and number:

What is magnitude and number?

Let's examine some more serious holes in the 10-EX proof:

Aside from the first step being wrong because 0.999... is not a number, that is, it’s ill-defined,

x  0.999... [define x] the second step:

10x  9.999... [multiply both sides by 10] is equivalent to adding 9x to the LHS and 9 to the RHS. This implies that 9x  9 , therefore x 1. So after the second step, x 1 is assumed until the conclusion. Multiplication by 10 should NOT imply the conclusion one is trying to prove. This is illogical. One must never use the conclusion one wishes to prove in order to arrive at the conclusion.

Now in this case, multiplication by 10 implies x = 1. Why? Well, adding 9x to the LHS and 9 to the RHS produces the same result as multiplying by 10. But we know from simple algebra that if we add a quantity to either side, it must be equal. Yes? So, 9x = 9 IMPLIES x = 1.

But this is what we are trying to prove! Can you see now how wrong this proof is? Furthermore, a product involving an ill-defined number is not permissible. I could say x = bird and I will get x = bird. However, when I introduce a quasi-mathematical object such as 0.999…, I get some strange results, because 0.999… tries to behave like a number.

The next step performs an ill-defined subtraction:

10x - x = 9.999... - 0.999...   [subtract x from both sides]

9x = 9   [result from subtraction that was already assumed after step 2]

x = 9/9   [divide by 9]

And finally the "conclusion", which was already assumed from step 2:

x = 1   [conclusion from division]

The most convincing evidence that the 10-EX proof is fake is discovered when applying the same algorithm to well-defined numbers (read as: rational numbers not expressed in any radix system).

a x a , b  b a kx k k  b

a (k 1) x (k  1) So, add to the LHS and b to the RHS. This implies that a x  b . So we know that if a rational number is fed into the algorithm, then after the a a kx k x  second step, that is, b , we still have b . Let's see how this fails with 0.999... :

a We cannot express 0.999... as b , so on blind faith we have:

x  0.999... 10x  9.999... We assume 10  (0.999..)  9.999...

Thus, 9x is added to the LHS and 9 is added to the RHS. But we know from simple algebra that whatever we add to the LHS must be equal to the RHS, that is, 9x 9  x  1.

But we know that if 0.999... is rational, then we must obtain the same answer after the second step! Since the answer is not the same, we know that 0.999... is not rational and must be a quasi-number (read as: ill-defined number).

As a final example in this regard:

5x = 3/4
50x = 30/4   [multiply both sides by 10]
45x = 27/4   [subtract 5x]
x = 27/(45 × 4) = 3/(5 × 4) = 3/20, as expected.
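The scale-and-subtract steps above can be replayed on any exact rational input; here is a minimal sketch in Python (the function name `ten_x_algorithm` is my own label, and `fractions.Fraction` keeps every step as finite rational arithmetic, so no limits enter):

```python
from fractions import Fraction

def ten_x_algorithm(x):
    """Multiply by 10, subtract the original, divide by 9 -- the 10-EX steps."""
    ten_x = 10 * x        # multiply both sides by 10
    nine_x = ten_x - x    # subtract x
    return nine_x / 9     # divide by 9

x = Fraction(3, 20)          # the example above: 5x = 3/4, so x = 3/20
print(ten_x_algorithm(x))    # 3/20 -- a rational input comes back unchanged
```

As the text argues, the algorithm returns any exact rational input unchanged.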

The next fallacious and misguided academic argument is to find a number between 0.999... and 1 which is baloney, because 0.999... is not a well-defined magnitude. While it makes sense to apply the theorems of mathematics to numbers, it makes no sense when ill-defined concepts are used. "Mathematicians" also have an acutely misguided understanding of radix systems. See Appendix.

The final fallacious argument is that a/a produces 1 when used as input into the previous algorithm. A little known fact is that a/a means a : a (read as: a compared with itself), which is the definition of an abstract unit! In fact, there is a difference between the abstract concept of unit and the measurement concept of unit. The algorithm used fallaciously to prove 0.999... equals 1 requires a well-defined number, that is, a rational number (abstract concept only). There is no rational number a/b such that a/b = 0.999...

(c) John Gabriel Author of the greatest unpublished work in mathematics:

What you had to know in mathematics but your educators could not tell you.

Appendix:

Converting rational numbers to a radix representation means having to describe a well-defined magnitude, that is, k = a/b with a, b ∈ ℤ and b ≠ 0, in terms of a sum formed using a given radix. Thus, if β is the radix, then

k = a_n β^n + a_(n-1) β^(n-1) + ... + a_2 β^2 + a_1 β + a_0 + b_1/β + b_2/β^2 + b_3/β^3 + ...   [S]

represents a given rational number k uniquely.

Let's see an example of how one can accomplish this using the well-defined rational number 1/4 in base 10 (decimal). So β = 10.

1/4 = (1/10)(10/4)

1/4 = (10/4)(1/10)

1/4 = (2 + 1/2)(1/10)

1/4 = 2/10 + (1/2)(1/10)

1/4 = 2/10 + (1/2)(1/10)(10/10)

1/4 = 2/10 + (10/2)(1/100)

1/4 = 2/10 + 5(1/100)

The above process of finding coefficients is summarized and taught in rote fashion. Most students never see this process demonstrated at school or at university. In fact, most professors of mathematics don't know it! However, fraction (read as: rational number) to decimal conversion works because of this process. So, all the a_i coefficients are 0 and the b_i coefficients are b_1 = 2 and b_2 = 5, with all the remaining b_i coefficients 0. Therefore, the representation of 1/4 in base 10 is:

...000000000000000000000.250000000000... [U]

[U] is the one and only representation of 1/4 in base 10.

However, the leading and trailing zeroes are insignificant, therefore we use the shorthand notations 0.25 or .25 to represent 1/4. Thus, to say that 0.25, 00.25, 0000.25, 0.2500000 are all different representations is not only misleading, it demonstrates a lack of understanding. These are simply different shorthand notations of the same number 1/4. It is understood that the exact representation follows the format of [S].

Given that a rational number may be represented by a sum expressed through [S], it follows that any representation that is altered even in one coefficient produces a different rational number. For example, change any one digit in the representation of 1/4, that is, ...0000.2500000..., to a different digit and it would no longer represent 1/4 in base 10.

Observe that not all rational numbers can be represented in a given radix system in a well-defined way, that is, having a finite representation. For example, certain rational numbers such as 1/4 do not have a finite representation in base 3, that is, 1/4 ≠ 0.02₃, much as 1/3 has no finite representation in base 10.
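The criterion stated earlier, that a reduced fraction has a finite representation in a base only when every prime factor of its denominator divides the base, can be put into a few lines of Python. The helper name `terminates_in_base` is my own; the test cases mirror the examples in the text (1/4 in base 10 versus 1/3 in base 10 and 1/4 in base 3):

```python
import math
from fractions import Fraction

def terminates_in_base(q, base):
    """True if the reduced fraction q has a finite representation in the given
    base, i.e. every prime factor of the denominator also divides the base."""
    d = Fraction(q).denominator
    g = math.gcd(d, base)
    while g > 1:
        while d % g == 0:     # strip out all factors shared with the base
            d //= g
        g = math.gcd(d, base)
    return d == 1             # nothing left over means the expansion terminates

print(terminates_in_base(Fraction(1, 4), 10))  # True  (0.25)
print(terminates_in_base(Fraction(1, 3), 10))  # False (3 does not divide 10)
print(terminates_in_base(Fraction(1, 4), 3))   # False
print(terminates_in_base(Fraction(1, 3), 3))   # True  (0.1 in base 3)
```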

From the incorrect ideas of modern mathematicians, radix theory evolved into the redefinition of rational numbers as ill-defined limits, so that magnitudes known as irrational numbers, which refuse to be measurable, could be accommodated, rather than keeping radix representations as they were initially understood, that is, as an infinite polynomial from which leading and trailing zeroes are stripped. In the event of a non-terminating fraction, such a polynomial is truncated, resulting in the concept of precision. The limit as defined and understood by mathematics professors who teach real analysis has a dual nature: it is a rational number when applied to commensurable magnitudes and some ethereal number when applied to any incommensurable magnitude.

At the fundamental level, converting a rational number into radix form is almost as simple as calculating a percentage. For example, 1/2 = 50/100 means that 50/100 or 1/2 can be written uniquely in base 100 as ...0000.(50)0000..., or as ...0000.5000... in base 10. Similarly, 1/2 can be written as a per-sixtage, that is, 1/2 = 30/60 = 30 per sixtage, where a sixtage = 1/60.

Incompetent mathematicians naturally obfuscate matters. A retort to some of the arguments just presented might be:

"Well, how is it that we can convert rational numbers in infinite radix representation to the form of a well-defined ratio such as a/b, a, b ∈ ℤ, b ≠ 0?"

My response is swift and simple: ratios of natural numbers are well-defined, so that even when incorrectly using infinite representations, it is possible to use approximations with sufficient accuracy. Let's see an example of the conversion of 0.1515...₇ to 1/4:

lim_{n→∞} (1/7 + 1/7^3 + 1/7^5 + ... + 1/7^(2n-1) + ...) = (1/7) ÷ (1 - 1/7^2) = (1/7)(49/48) = 7/48

lim_{n→∞} (5/7^2 + 5/7^4 + 5/7^6 + ... + 5/7^(2n) + ...) = (5/49) ÷ (1 - 1/7^2) = (5/49)(49/48) = 5/48

7/48 + 5/48 = 12/48 = 1/4

You may now gasp and say

"But you have just used limits in contradiction to several claims you made!"

Not so. The rational number 1/4 cannot be finitely represented in base 7. Read the section called "Understanding the long division algorithm" further on to see how this could have been done without limits. One could have instead argued as follows:

  0.101010101...
+ 0.050505050...
----------------
  0.151515151...

Thus, I obtain the same result even with approximations. It is more accurate to argue as follows:

7/48 + 5/48 = 1/4, rather than using radix representation.

A trick to convert 0.151515...₇ into a ratio:

x = 0.151515...₇
100₇ x = 15.1515...₇   [multiply by 100₇, that is, by 49]
66₇ x = 15₇   [subtract x]
x = 15₇/66₇ = 12/48 = 1/4

The previous trick works because ratios of rational numbers are well-defined and approximations are being used. Using approximations of incommensurable magnitudes is equivalent to using ratios. This same infamous trick is used to incorrectly prove that 0.999... equals 1. For example, to conclude that 10x=9.999... is possible only if 9.999... is treated as a ratio. But 9.999... is an incomplete representation which does not represent any number.
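The ratio 15₇/66₇ in the trick above can be checked with exact integer arithmetic; a short sketch (Python's `int(s, 7)` reads a digit string in base 7):

```python
from fractions import Fraction

num = int("15", 7)           # 1*7 + 5 = 12
den = int("66", 7)           # 6*7 + 6 = 48
print(Fraction(num, den))    # 1/4
```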

But why does the trick work for other representations such as the ones demonstrated above?

a , a b , b  0 Well, one can find a ratio b such that a x x x 1  2 3  ... b 10 102 10 3 , but what ratio is equal to 0.999... ?

a a , a b , b  0  0.999... That is, no b exists, such that b .

1  0.999... 1 0.999... To claim that 1 is to assume that !

If you are not convinced, try converting 3.141592653... (π) to a ratio of natural numbers. It is not possible, because π is not well-defined when represented in any given radix, that is, π is not a rational number!

A common retort by most mathematics academics to the previous paragraph is:

"But I can calculate π to any desired degree of accuracy that I like!"

My response:

No, you imbecile! You can't. You can never compute π to 100% accuracy, not even if you could spend eternity doing so. You can calculate π to a million trillion gazillion decimal places, and although it might make you feel nice and cosy, what you have is not π, but a rational number.

Radix systems were designed to represent rational numbers.

Conclusion:

Some of the problems that arise when trying to think of rational numbers as being defined by a least upper bound (such as a limit):

1. Rational numbers do not have finite representation in every radix system.

2. This scheme makes it possible to have more than one radix representation for rational numbers. Ask yourself what sense this makes in terms of being well-defined. If 1 and 0.999... both represent the same number, your radix system immediately loses one of its original features, that is, unique representation, which helps one to compare different rational numbers. For example, 0.800456 is greater than 0.800436 because 5 > 3. Suppose you have two non-repeating, non-terminating radix representations which you have never encountered (that is, not π, √2, e, etc.). In order to compare these two representations, you would have to compute the limits, which you may not be able to do with 100% accuracy. How then can you compare these "numbers"? Answer: You can't.

3. Trying to extend this idea to include incommensurable magnitudes (such as irrational numbers) introduces a complete new set of problems. For starters, the limit designating an incommensurable magnitude is no longer a number! If it can't be completely measured, then it is not a number. Irrational "numbers" are not numbers!

Answering Comments/Objections from readers:

A misconception involving the conversion of fractions to decimal form:

In elementary school (I don't know about the USA because it's so far behind, but where I was reared, this was the case), children are taught to convert (well-defined) fractions into decimal format, that is, the base 10 radix system.

Educators begin with a well-defined ratio a/b and teach a rote method for converting the same into radix form. For example, 1/4 = 0.25₁₀. To understand why this works, go back to page 15 of this document where a detailed example is given. Students are taught the lesson as follows:

    0.25
4 ) 1.00
      8
     20
     20
      0

This example is well-defined since 2/10 + 5/100 = 1/4.
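The long-division layout above can be written out as a digit-by-digit routine; a sketch in Python (the helper name `decimal_digits` is my own label for the rote method):

```python
def decimal_digits(a, b, n):
    """Return the first n digits after the point produced by long division a/b."""
    digits = []
    r = a % b                    # remainder after the integer part
    for _ in range(n):
        r *= 10                  # bring down a zero
        digits.append(r // b)    # next quotient digit
        r %= b                   # new remainder; 0 means the division terminates
    return digits

print(decimal_digits(1, 4, 5))   # [2, 5, 0, 0, 0] -- remainder reaches 0
print(decimal_digits(1, 3, 5))   # [3, 3, 3, 3, 3] -- remainder never reaches 0
```

The contrast between the two calls is exactly the contrast the text draws between 1/4 and 1/3 in base 10.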

It's the next example which is absolute rubbish:

    0.33...
3 ) 1.00
      9
     10
      9
     1...

There exists no finite number of steps using 0.333... to arrive at 1/3, which is well-defined. In fact, even if one could sit down forever and compute the partial sums of the series 3/10 + 3/100 + 3/1000 + ..., it would never be possible to arrive at 1/3. Why?

1. Any process involving an infinite number of steps is ill-defined.

2. To claim that the least upper bound (limit) of such a sum is 1/3 and conclude the sum is equal to 1/3 displays a complete lack of common sense. The limit is never equal to the sum. In fact, the limit concept is itself ill-defined, because in the case of an incommensurable magnitude (aka π), it (the limit) is an ethereal concept and essentially meaningless.

1  0.333... 3

Rather,

1/3 is represented as an approximation by 0.333 in base 10. However, 1/3 can be represented exactly in many other bases. For example:

1  0.1 3 3

1  0.2 3 6

1  0.n3n n 3 where is a natural number.
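The base-3n claim is easy to verify exactly: the single digit n placed after the point in base 3n has the value n/(3n) = 1/3. A quick check with exact fractions:

```python
from fractions import Fraction

# In base 3n, the one-digit fraction 0.n has the exact value n/(3n) = 1/3.
for n in range(1, 6):
    assert Fraction(n, 3 * n) == Fraction(1, 3)
print("0.n in base 3n equals 1/3 exactly for n = 1..5")
```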

The Swiss mathematician Euler was the first with wrong ideas in this regard, and he influenced the thinking of incompetent modern mathematicians.

A popular fallacy: All "real" numbers can be represented in decimal. The last foolish professor I met who tried to convince his class of these wrong ideas was an ignorant fool from the mathematics department at the University of Cape Town, South Africa. Needless to say, he is not alone in his wrong views. I have proved that the concept of real number is ill-defined - in simple language, real numbers as imagined by ignorant professors do not exist! I must add here that the University of Cape Town mathematics department is one of the most dysfunctional and useless academic departments I have encountered in my life's journey. The university math departments in other countries also leave much to be desired.

A common mistake with inequalities:

While it is correct to say x ≤ 3, it is incorrect to say 2 ≤ 3. Many ignorant and misguided mathematics professors are guilty of this error. It's known as "not paying sufficient attention to detail".

Explanation:

x  3 means x  3 OR x  3

Suppose x  2 .

Then, 2 3 . One cannot write 2 3 because this means:

2 3 OR 2 3

The second part of the condition ( 2 3 ) is impossible and therefore ill-defined. When considering x  3 , only one of the operators ( ,  ) or neither is possible when x transitions to a fixed value.

There is no such thing as a strict or non-strict inequality. An expression is either an inequality or it is not. Thus, the expression x ≤ 3 is in fact a conditional inequality/equality. While x is unknown, the condition remains unevaluated: it could be either an equality or an inequality, that is, x < 3 or x = 3.

If x  2 then an inequality is the result, that is, 2 3 .

If x  3 then an equality is the result, that is, 3 3 .

If x  4 then no solution exists, that is, x  3 .

A common mistake with arithmetic:

You would think that a mathematician with a PhD would have a sound knowledge of arithmetic, wouldn't you? You would be wrong.

The arithmetic operators −, +, × and ÷ are all binary operators operating on finite representations of numbers (as defined by the brilliant Ancient Greeks, not the baboons who constitute today's modern academics). The first operator, that is, subtraction, is the most primitive arithmetic operator.

How did we get the subtraction/difference operator?

Go back and study the well-defined definition of number in this document. You must first realize that a difference was not possible (not well-defined), until after ratios and numbers had been well-defined.

All the remaining operators are defined in terms of difference. To understand this better, I suggest you study my new axioms of arithmetic.

John Gabriel's Axioms of Arithmetic:

1. The difference (or subtraction) of two magnitudes is that magnitude which describes how much the larger exceeds the smaller.
2. The difference of equal magnitudes is zero.
3. The sum (or addition) of two magnitudes is that magnitude whose difference with either of the two magnitudes is the other magnitude.
4. The quotient (or division) of two magnitudes is that magnitude that measures (in terms of difference) either magnitude in terms of the other.
5. If a unit is divided by a magnitude into parts, then each of these parts of a unit is called the reciprocal of that magnitude.
6. Division by zero is undefined.
7. The product (or multiplication) of two magnitudes is the quotient of either magnitude with the reciprocal of the other.
8. The difference of any magnitude and zero is the magnitude.

Observe that all the basic arithmetic operations are defined in terms of difference.

Now ask yourself, how is it possible to well-define a sum using ill-defined representations of numbers in decimal, for example, 0.222... and 0.888... (2/9 and 8/9 respectively)? In decimal arithmetic, one must start with the rightmost digits because there could be carries. But if we do this, we arrive at a result of 1.111...0. No matter how far right we go, we have to somehow imagine an ill-defined carry from *infinity*. However, infinity is impossible to attain, so how could such an operation be well-defined? Well, it can never be well-defined, and if approximations are used, that is, 0.222 and 0.888, then we have incorrect results.
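Adding finite truncations, as the passage describes, can be done exactly; a sketch with six-digit truncations (the choice n = 6 is arbitrary):

```python
from fractions import Fraction

n = 6
a = Fraction(int("2" * n), 10 ** n)   # 0.222222, a truncation of 0.222...
b = Fraction(int("8" * n), 10 ** n)   # 0.888888, a truncation of 0.888...
s = a + b                             # exactly 1.111110 -- note the final 0 digit
print(s)                              # 111111/100000
```

Every finite truncation produces a trailing 0 in the last place, which is the carry problem the text points at.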

Once again, unbelievably stupid professors of mathematics propound these moronic ideas.

Perhaps the most damning evidence that 0.999... is not equal to 1, is that one can start with a geometric series that has a limit and by using only ordinary arithmetic, arrive at a series that is not geometric, that is, a series whose limit cannot be found in any systematic way.

By arithmetic axiom number 7 (the product of two magnitudes is the quotient of either magnitude with the reciprocal of the other), and granting the academics' claim that 0.999... = 1:

0.999... × 0.999... = 0.999... ÷ (1/0.999...) = 0.999... ÷ (1/1) = 0.999... = 1

That is:

(0.999...)^2 = 1

So, we avoid using limits, because these are ill-defined, and proceed as follows:

1 = (9/10 + 9/100 + 9/1000 + ...) × (9/10 + 9/100 + 9/1000 + ...)

1 = (9^2/10^2 + 9^2/10^3 + 9^2/10^4 + ...) + (9^2/10^3 + 9^2/10^4 + 9^2/10^5 + ...) + (9^2/10^4 + 9^2/10^5 + 9^2/10^6 + ...) + ...

1 = 9^2 [(1/10^2 + 1/10^3 + 1/10^4 + ...) + (1/10^3 + 1/10^4 + 1/10^5 + ...) + (1/10^4 + 1/10^5 + 1/10^6 + ...) + ...]

1 = 9^2 (1/10^2 + 2/10^3 + 3/10^4 + 4/10^5 + 5/10^6 + 6/10^7 + 7/10^8 + 8/10^9 + 9/10^10 + 10/10^11 + ...)

The last statement is a series that is not geometric. And while its partial sums can be evaluated and appear to approach the value of 1/81 = 0.0123456790123456790123456790..., it is impossible to find the limit, which is 1/81, unless of course we already knew this. We can sit down and calculate the partial sums forever and NEVER arrive at 1/81. The only way to guess the limit is to visually observe that the decimal expansion appears to be the same.
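The partial sums of this non-geometric series can be computed exactly and compared with 1/81; a sketch (every partial sum falls strictly short of 1/81, as the text says):

```python
from fractions import Fraction

target = Fraction(1, 81)
s = Fraction(0)
for k in range(1, 30):
    s += Fraction(k, 10 ** (k + 1))   # terms 1/10^2 + 2/10^3 + 3/10^4 + ...
    assert s < target                 # each partial sum stays strictly below 1/81
print(target - s)                     # the remaining nonzero gap after 29 terms
```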

What this tells us is that

1/10^2 + 2/10^3 + 3/10^4 + 4/10^5 + 5/10^6 + 6/10^7 + 7/10^8 + 8/10^9 + 9/10^10 + 10/10^11 + ...

is an ill-defined concept when interpreted as an actual sum or number. Therefore, applying well-defined arithmetic operations to ill-defined numbers can and does result in anomalies such as 0.999... = 1. The methods of arithmetic apply to well-defined numbers that have finite representation. There is no such thing as 0.333... or 0.999... or 0.xxx... Mixing these quasi-mathematical objects with numbers produces strange results, which is quite frankly not surprising.

No concept involving infinity is well-defined.

Understanding the long division algorithm.

It's important to understand the algorithm because absurd concepts such as 1/3 = 0.333... and 0.999... have their origins in this algorithm. Given two rational numbers p and d, the division algorithm states:

p qd  r where d is the divisor, q is the quotient and r is the remainder.

The ancient Greeks understood the division algorithm to be a finite process, that is, given two rational numbers, there is exactly one quotient and one remainder.

Example 1: 9 ÷ 4 = 2 r 1, since 9 = 2(4) + 1.

Example 2: 1 ÷ 3 = 0 r 1, since 1 = 0(3) + 1.

The algorithm is not an infinite algorithm. The stopping condition is the remainder.
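The statement p = qd + r with a single quotient and remainder is exactly what integer `divmod` computes; the two examples above:

```python
# Example 1: 9 divided by 4
q, r = divmod(9, 4)
print(q, r)              # 2 1
assert 9 == q * 4 + r    # p = qd + r

# Example 2: 1 divided by 3 -- the algorithm stops with remainder 1
q, r = divmod(1, 3)
print(q, r)              # 0 1
assert 1 == q * 3 + r
```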

Many centuries later, incompetent mathematicians began doing the following things:

1/3 = (1/10)(10/3)

1/3 = (10/3)(1/10)

1/3 = (3 + 1/3)(1/10)

1/3 = 3/10 + (1/3)(1/10) = 0.3 + (1/3)(1/10)

1/3 = 0.3 + [3/10^2 + (1/3)(1/10^2)] = 0.33 + (1/3)(1/10^2), and then 0.333 + (1/3)(1/10^3), and so on.

Thus, it is generally untrue that 1/3 = 0.333..., because at some stage in the ellipsis we must have

1/3 = 0.333...3 (n threes) + (1/3)(1/10^n)   [CORRECT]

If the fools had paid attention to detail, they would have realized that [CORRECT] is the reason 1/3 ≠ 0.333...
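The identity labelled [CORRECT] can be verified exactly for any finite n; a sketch with exact fractions:

```python
from fractions import Fraction

for n in range(1, 10):
    threes = Fraction(int("3" * n), 10 ** n)      # 0.33...3 with n threes
    tail = Fraction(1, 3) * Fraction(1, 10 ** n)  # (1/3)(1/10^n)
    assert threes + tail == Fraction(1, 3)        # 1/3 = 0.33...3 + (1/3)(1/10^n)
print("identity verified for n = 1..9")
```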

Please note that n ≠ ∞, because ∞ is NOT a number! And consider what would happen if indeed n = ∞: this would imply that there is a ZERO in the infinite-th position of 0.3333...

Can you see now how ridiculously stupid modern mathematicians really are?

Thus, any fraction of the form a/b = 0.xxx... is ill-defined nonsense.

The correct way to add in decimal is as follows:

Suppose you want to add 1/3 and 2/3:

1/3 = 0.333...3 (n threes) + (1/3)(1/10^n) and 2/3 = 0.666...6 (n sixes) + (2/3)(1/10^n)

So,

1/3 + 2/3 = 0.333...3 + 0.666...6 + (1/3)(1/10^n) + (2/3)(1/10^n)
1/3 + 2/3 = 0.333...3 + 0.666...6 + (1/3 + 2/3)(1/10^n)
1/3 + 2/3 = 0.999...9 + 1/10^n = 0.999...9 + 0.000...1 (1 in the nth place) = 1

As expected, the above method is exact and there are no ill-defined concepts such as infinity. Observe that

1 - 0.999...9 (n nines) = 1/10^n and 0.999...9 (n nines) = 9/10 + 9/10^2 + 9/10^3 + ... + 9/10^n ≠ 0.999...

Also observe that 1 could be written finitely in base 10 as:

1 = 0.999...9 (n nines) + 1/10^n, where n is a natural number.

While 0.999...9 + 1/10^n is well-defined, 0.999... is NOT well-defined.
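The finite identity 1 = 0.999...9 + 1/10^n is likewise checkable for any chosen n:

```python
from fractions import Fraction

for n in range(1, 10):
    nines = Fraction(int("9" * n), 10 ** n)     # 0.99...9 with n nines
    assert nines + Fraction(1, 10 ** n) == 1    # 1 = 0.99...9 + 1/10^n
print("identity verified for n = 1..9")
```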

From an earlier example, we could write 1/4 in base 7 finitely as follows:

1/4 = 0.15₇ + (1/4)(1/7^2)

From which it is easy to evaluate the right-hand side without limits, since it (the right-hand side) is well-defined:

0.15₇ + (1/4)(1/7^2) = 1/7 + 5/49 + 1/196 = 28/196 + 20/196 + 1/196 = 49/196 = 1/4

Since the beginning of time, incompetent human mathematicians did not know or understand any of these facts which I reveal in this document. Instead they chose ill-defined concepts such as the limit, which produces paradoxical results and has resulted in a flawed calculus.

As an exercise, write 1/11 finitely in decimal.

A disproof using set theory.

Modern academics are ignorant beyond belief:

They claim [a] = 0.9; 0.99; 0.999; ... is a sequence which has no last term.

However, if [a] could have a last term, it would be 0.999...

There is NO WAY anyone can disagree with that.

So one would have an infinite set that looks something like

[a ]  0.9; 0.99; ... 0.999;... .

Now [a] is bounded from above by 1, which means it has a supremum not in the set [a]. So what do ignorant academics do?

Define the last term of [a] to be its limit, that is, 1. But we know that if there were a last term, it would be 0.999...

Hence, any assertion that 0.999... = 1 is supported by this fallacy alone. There is NO proof. The 0.999... = 1 assertion is supported by a lame-brain, ill-formed definition, that is, taking the limit to be the last term.

As a final analogy, consider the set of integers D = {-4; 6; 22; 31}. We know it is bounded above by 31 and 32. What sense would it make if one said in this case that 31 = 32? But this is what academics do when they set the "last term", that is, 0.999... = 1.

A discussion from Space Time and The Universe (the STATU forum).

I know the origins of the word infinite. It's the meaning it has to mathematicians today that is entirely wrong. They imagine it is a process that can be completed. In their puny minds, they think that 0.999... has no last term, but yet they can determine the sum of the infinite series. This is the problem. Common sense says that one would have to sit down and add the terms forever and even so, one would never reach 1 (easily demonstrated by mathematical induction). One would get very close to 1 but NEVER reach it. They are almost the same value but they are NOT. It's impossible to determine a difference between 0.999... and 1 because 0.999... is an ill-defined number just like all repeated decimals.

Mathematicians assert that 0.999... has no last term and then suddenly make the quantum leap to the assertion that

a SS lim n  n 1 r

It's exactly the way incompetent mathematicians think about it. The limit of the sum is not the same as the sum.

As long as you think that S = lim_{n→∞} S_n = lim_{n→∞} (a - ar^n)/(1 - r) = a/(1 - r),

there is no use continuing this discussion, because S is NOT the sum, it is the LIMIT of the sum. No infinite sum is possible. In fact, any process involving infinity (for example, repeated decimals) is ill-defined.

For S to equal a/(1 - r), this demands that ∞ be treated as a "number", that is,

ar^∞ = 0.

But at the same time any foolish mathematician would balk if anyone suggested 8/∞ = 0. In fact, not only is infinity not a number, but the implied arithmetic, that is, 8/∞, is not possible. Why? Because arithmetic applies ONLY to well-defined rational numbers. This excludes repeated decimals and repeated radix forms in other bases.

So, no need to respond unless you agree with ALL the following:

1. Infinity is not a number.
   And this is why it's wrong to think it can be approached.

2. Infinity is not a process that can be completed.
   And this is why it's wrong to think that 0.333... equals 1/3 or that 3.14159... equals π.

3. Any idea of the form k/∞ is meaningless nonsense.
   And this is why one cannot discard ar^n/(1 - r) from the sum to infinity, because a/(1 - r) = lim_{n→∞} S_n is NOT the sum.

4. Halving a well-defined number is not the same as "halving or proportioning infinity" (a meaningless concept).

9 9 9  1 9 9 9  1      is meaningful BUT   ...   10 100 1000  2 10 100 1000  2 is nonsense

5. The sum to infinity is NOT the same as the limit of the infinite sum. [IMPERATIVE]

 S IS NOT the sum. a ar n The SUM, if it could be defined, is Sn   where n   , but  is not 1r 1  r a number so that one can say n   .

The LIMIT OF THE SUM is

lim_{n→∞} S_n = lim_{n→∞} [a/(1 - r) - ar^n/(1 - r)] = a/(1 - r) - 0 = a/(1 - r)

6. The arithmetic property of distribution does not generally apply to an infinite series.

k a b  c ...  ak  bk  ck  ...  unless it is known that

a b  c ... has a last term, such as in a binomial expansion where not all the middle terms can be determined.

7. Most mathematicians will argue that the last term in the series 9/10 + 9/100 + 9/1000 + ... is not known. But this is plainly untrue.

n ar a n1 n  2  Sn   ar  ar ...  ar  a r 1

n1 Therefore, the last term is ar (just reverse the series, since we know a is the first n1 term). This being the case, T  ar which implies that n1 9 10 9 T  ar    . So, from the wrong ideas of 10 10 10 

Leonhard Euler, the sum to infinity, which is ill-defined, got to be S = a/(1 - r). But this is only possible if 9/10^∞ = 0, and we know this is not true.
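The closed form for the finite partial sum, and the discarded term ar^n/(1 - r), can both be checked exactly for a = 9/10 and r = 1/10; a sketch:

```python
from fractions import Fraction

a, r = Fraction(9, 10), Fraction(1, 10)

def S(n):
    """Partial sum a + ar + ... + ar**(n-1), added term by term."""
    return sum(a * r ** k for k in range(n))

for n in range(1, 8):
    assert S(n) == a * (1 - r ** n) / (1 - r)   # closed form of the partial sum
    assert 1 - S(n) == a * r ** n / (1 - r)     # the discarded term ar^n/(1-r)
print("no finite n makes the discarded term zero")
```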

Euler's definition can be compared to the following: One is required to measure the surface area or volume of a cylinder. One covers the cylinder in very thin plastic wrap. One then finds the surface area of the plastic wrap and considers it to be the same as that of the cylinder. Now even though there is very little difference in the surface area without the plastic wrap, there is still a difference. Euler foolishly discards the tiny difference in his flawed definition of infinite sum. This little act of indiscretion leads to major confusion the likes of 0.999... = 1, an obviously FALSE assertion.

So much for the incompetence and gross stupidity of Euler and modern mathematicians who have blindly followed his lead.

 9  k  0.999... [A] k1 10

n 9 lim 1 n  k [B] k1 10

The LHS of [A] is not equal to the LHS of [B]. So how can the RHS of [A] be equal to the RHS of [B] ? Answer: 0.999... 1

As a final note:

One could define S as follows:

S = ... + ar^(n-1) + ar^(n-2) + ... + ar^2 + ar + a = (ar^∞ - a)/(r - 1), where n ≥ 0 and n → ∞.

lim_{n→∞} S_n = lim_{n→∞} [... + ar^(n-1) + ar^(n-2) + ... + ar^2 + ar + a] = a/(1 - r).

Observe however, that

... + ar^(n-1) + ar^(n-2) + ... + ar^2 + ar + a

is NOT the same as:

lim_{n→∞} [... + ar^(n-1) + ar^(n-2) + ... + ar^2 + ar + a] = a/(1 - r)

(c) John Gabriel Author of the greatest unpublished work in mathematics:

What you had to know in mathematics but your educators could not tell you.

The spirit of Archimedes lives in me. I may very well be the last great mathematician.