0.9 repeater = 1

I'll just take post 18 for an example. This is the one that says 1/3 + 1/3 + 1/3 = 1, so 0.333... + 0.333... + 0.333... must equal 1, too.

This relies on 0.333... equaling 1/3. While this may indeed be true in some sense, it also demonstrates the impossibility of converting every fraction to a finite decimal representation. The repeating decimal is the indication that the division algorithm failed.

Even if you get by that, you cannot add 0.333... to anything, for the reasons elaborated in my last post. The addition algorithm involves lining up the right-most digits, starting your addition there, and carrying the results to columns to the left.

Since one cannot start at the right-most digit of 0.333..., one cannot add anything to it.

Remember that I'm playing the Advocate here -- I agree that the rules of math say that 0.999... and 1.0 are two different ways of writing the same number. However, I maintain for the sake of argument that this is not a derived truth but a starting assumption. It is insusceptible of proof, because the normal algorithms of addition, subtraction, and multiplication are undefined for repeating decimals. (It's a philosophical point, not a mathematical one.)

If you think I'm wrong, please point out the right-most digit of 0.999.... If you cannot do so, please point out the alternate algorithm for addition that allows you to start the operation at a non-right-most column.
 
I'll just take post 18 for an example. This is the one that says 1/3 + 1/3 + 1/3 = 1, so 0.333... + 0.333... + 0.333... must equal 1, too.

This relies on 0.333... equaling 1/3. While this may indeed be true in some sense, it also demonstrates the impossibility of converting every fraction to a finite decimal representation. The repeating decimal is the indication that the division algorithm failed.
I agree that the 1/3 = 0.333... argument is a poor approach for those who are unfamiliar with the nature of infinite mathematical patterns. It would seem on the surface to be a circular argument. The concept of infinity is hard to wrap one's mind around. Note that I did not say infinite "repeater" or "repeating". This would imply a sequence. 0.333... and 0.999... are not sequences. They do not "go on forever", they simply "are". They are decimal numbers whose representations contain an infinite number of digits, all of which exist simultaneously, not as a sequence.

Even if you get by that, you cannot add 0.333... to anything, for the reasons elaborated in my last post. The addition algorithm involves lining up the right-most digits, starting your addition there, and carrying the results to columns to the left.

Since one cannot start at the right-most digit of 0.333..., one cannot add anything to it.

Remember that I'm playing the Advocate here -- I agree that the rules of math say that 0.999... and 1.0 are two different ways of writing the same number. However, I maintain for the sake of argument that this is not a derived truth but a starting assumption. It is insusceptible of proof, because the normal algorithms of addition, subtraction, and multiplication are undefined for repeating decimals. (It's a philosophical point, not a mathematical one.)

If you think I'm wrong, please point out the right-most digit of 0.999.... If you cannot do so, please point out the alternate algorithm for addition that allows you to start the operation at a non-right-most column.

The fallacy in this approach is that the "addition algorithm" involves aligning the corresponding digits with respect to the decimal point of each, not the rightmost digit.
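
A quick worked example of what "align on the decimal point" means (my illustration, not the poster's): to add 2.5 and 0.25, pad with zeros so tenths line up with tenths and hundredths with hundredths:

[code]
  2.50
+ 0.25
------
  2.75
[/code]

Aligning the right-most digits instead (5 under 5) would add tenths to hundredths and give a wrong answer.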
 
I'll just take post 18 for an example.
There's a reason why post 18 was not among the list of posts I recommended.

This relies on 0.333... equaling 1/3.
That's the reason. It assumes a fact that's equivalent to the fact it purports to prove.

Since one cannot start at the right-most digit of 0.333..., one cannot add anything to it.
That's false. There are many different algorithms for adding finite decimals, and there also exist correct algorithms for adding certain kinds of infinite decimals, notably repeating decimals.
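
To make that concrete, here is a minimal sketch (mine, not the poster's; the helper repeating_decimal is a hypothetical name) of one such algorithm: convert each repeating decimal to an exact fraction via the standard pre-period/period formula, then add the fractions exactly.

[code]
from fractions import Fraction

def repeating_decimal(int_part, nonrep, rep):
    """Exact value of int_part.nonrep(rep), where rep repeats forever.
    Standard identity: 0.nonrep(rep) =
    (int(nonrep + rep) - int(nonrep)) / (10**(n+r) - 10**n)."""
    n, r = len(nonrep), len(rep)
    numer = int(nonrep + rep) - (int(nonrep) if nonrep else 0)
    return int_part + Fraction(numer, 10**(n + r) - 10**n)

third = repeating_decimal(0, "", "3")      # 0.333...
assert third == Fraction(1, 3)
assert third + third + third == 1          # 0.333... + 0.333... + 0.333... = 1

assert repeating_decimal(0, "", "9") == 1  # 0.999... = 1, same machinery
assert repeating_decimal(0, "58", "3") == Fraction(7, 12)  # 0.58333...
[/code]

The addition on the fractions is exact, so no right-most digit is ever consulted; the carries the poster worries about are absorbed into the rational arithmetic.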

It is insusceptible of proof, because the normal algorithms of addition, subtraction, and multiplication are undefined for repeating decimals. (It's a philosophical point, not a mathematical one.)
No, it's a mathematical fact that the normal algorithms (by which you must mean the ones with which you are familiar) are undefined for repeating decimals. It is also a mathematical fact that certain other algorithms for operating on repeating decimals are both well-defined and correct.

If you think I'm wrong, please point out the right-most digit of 0.999....
Now you're the one who's assuming an equivalent of the thing you wish to prove. You are wrong, but there is no right-most digit of the repeating decimal 0.999... . You have incorrectly assumed that the only way you can be wrong is for there to be a right-most digit in that repeating decimal.

If you cannot do so, please point out the alternate algorithm for addition that allows you to start the operation at a non-right-most column.
The meaning of the repeating decimal 0.999... is the value of a certain infinite series, so the algorithms for operating on infinite series apply to it. In particular,

[latex]
\begin{eqnarray*}
10 \times 0.999...
& = & 10 \times \sum_{i=1}^{\infty} (9 \times 10^{-i}) \\
& = & 10 \times \lim_{n \rightarrow \infty} \sum_{i=1}^{n} (9 \times 10^{-i}) \\
& = & \lim_{n \rightarrow \infty} (10 \times \sum_{i=1}^{n} (9 \times 10^{-i})) \\
& = & \lim_{n \rightarrow \infty} (10 \times ((9 \times 10^{-1}) + \sum_{i=2}^{n} (9 \times 10^{-i}))) \\
& = & \lim_{n \rightarrow \infty} ((10 \times 9 \times 10^{-1}) + \sum_{i=2}^{n} (10 \times 9 \times 10^{-i})) \\
& = & \lim_{n \rightarrow \infty} (9 + \sum_{i=2}^{n} (9 \times 10^{1-i})) \\
& = & \lim_{n \rightarrow \infty} (9 + \sum_{i=1}^{n-1} (9 \times 10^{-i})) \\
& = & 9 + \lim_{n \rightarrow \infty} \sum_{i=1}^{n-1} (9 \times 10^{-i}) \\
& = & 9 + \lim_{n \rightarrow \infty} \sum_{i=1}^{n} (9 \times 10^{-i}) \\
& = & 9 + 0.999...
\end{eqnarray*}
[/latex]

Unless I made a mistake, every step of the above calculation is justified by a definition or theorem of mathematics.

Letting k=0.999..., the above says 10k=9+k, so 9k=9, so k=1. Hence 0.999...=1.
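
If you want to sanity-check the derivation numerically, the key identity 10·s_n = 9 + s_(n-1) already holds for the partial sums s_n = Σ 9·10^(-i), i = 1..n, and taking limits on both sides gives 10k = 9 + k. A small sketch of mine, using exact rationals:

[code]
from fractions import Fraction

s = [Fraction(0)]                      # s[0] = 0; s[n] = sum of 9*10^-i, i = 1..n
for n in range(1, 9):
    s.append(s[-1] + Fraction(9, 10**n))

for n in range(1, 9):
    assert 10 * s[n] == 9 + s[n - 1]   # the finite version of 10k = 9 + k

print(float(s[8]))                     # 0.99999999, creeping up on 1
[/code]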
 
(I realize this is the fifth page of an old thread, but I have never really thought about this before now)

Do I have this right?


The "distance" between .999... and 1 is infinitely small.

Another way of saying infinitely small is 1/infinity.

1/infinity = 0.

So: the "distance" between .999... and 1 = 0.

So: They are the same number

This makes perfect sense to me. Is this an accurate way to think of it?
 
W.D. Clinger,

The meaning of a mathematical expression is whatever we agree on. It's not intrinsic.

By the way, one might want to consider that using infinity in ordinary calculation leads to undefined results, too. For example, 1 * infinity = infinity. 2 * infinity = infinity. Since infinity == infinity, therefore 1 == 2.

The only way you can prevent this is to disallow the operations or declare the result meaningless (which any sane person would). Note, however, that you allow the same type of undefined behavior in your algorithm for operating on an infinite series because you introduce infinity as one of the terms. Even though it's common sense (and basic math) to see that infinity on one side cancels the infinity on the other, just as any normal number would, it's a huge leap of faith -- completely unsupported by any real-world application -- to use infinity this way. It's solely a definitional truth, used because it makes calculations more useful. It is not based on any experience with countable items.

I suspect the reason most mathematicians get so upset with people on this question is that they (the mathematicians) don't understand where the non-mathematicians are coming from.

It's not so much incredulity or stubbornness, as it is insistence that math exists in order to abstract from, manipulate, and then re-relate the answer to reality.

For an ordinary person, taking three apples and splitting them among friends means that each friend gets one apple, not that each friend gets 0.333... of three apples. In the real world, if you split one apple three ways and are relatively exact about the split, one friend gets 0.333, the second gets 0.333, and the third gets 0.334. Apples are not infinitely splittable, nor are they amenable to esoteric notions of mathematical piety.

The fact is that, except in computation, infinity doesn't exist (oh, perhaps the universe is infinite, I suppose, but since we can't measure it, we can't know, and, even if we knew, we can't split it or multiply it, so operations on the universe as a whole are undefined).

An ordinary bloke who gets 0.333... as the answer to his long division problem may use the word "infinite," but what he means is that it's conceptually required that he be able to keep going that way forever, but not practically possible, for one cannot continue infinitely (one's pencil lead would give out, if nothing else). It's the point, not at which apperception knocks him cold, but at which he says, "Good enough for the job" and moves on to the next problem.

Those of us with math geekhood in our veins want neat solutions, with rules to handle every possible situation. Secretly, down deep, we want an operation that lets dividing by zero be meaningful somehow. We also want to be able to use infinity -- a by-definition undefined quantity -- as if it were a regular number we can plug into our existing formulae and go home after a good day's work.

The foundation for using infinity this way simply isn't present, except as a convenience. The numbers work out in the end, and we can usually translate things back to the real world with new information added. That may indeed be sufficient justification for the process, but if so, the philosophy behind it is only pragmatism.

It all yearns towards Heisenberg in the end. What, we can know the circumference exactly but only as long as the poor radius is left irrational? Well, then, by all means declare by fiat that the radius is exactly one unit. What? We can no longer calculate the circumference? I don't believe you.

But ah, says the mystic robed mathematician, I can indeed tell you the answer. First I must translate your requirements into my arcane and learned scribblings, and perhaps sacrifice a sparrow or two. Then I shall make the scribblings leap upon one another and tear each other to bloody bits, using rules only I and the other magi fully understand. No, no, you may not watch; these beasts become irrational, and would attack anything in sight. You just sit there, good man, and at the end of the fight, your figure will come stumbling through that door, ready for you to use in your humble, real-world kind of calculations.

But, says the peasant, I have this 14-inch long strap I must use for a radius, so how long must I roll the rim metal to make my perfectly circular wheel?

After argle-bargling for hours, the mathematicians tell him the rim strip must be exactly 2 x [pi] x Radius. The poor peasant can't find [pi] on his measuring string, so he begs the learned lords to explain.

You take your radius, see, says the Lord High Mathematician, and multiply it by two.

I've only got the one.

Well then make another, for when we place the radii lengthwise on the same line, their combined length will equal the diameter. From there we can see that the circumference must equal [pi] times the diameter. Bob's your uncle, laddie. Off you go.

The commoner simply gave up and rolled a strip of metal he thought would probably be long enough, with great care and no help from irrational numbers. "Bit near 3 1/4," he said, and worked from that. He managed to get the wheel complete. His rim, true, was a bit too long, but he trimmed the excess until it was a perfect match.

He then reattached his wheel to his carriage and rode away from the filth and the muck beside the road, leaving the highly-educated mathematicians still arguing about the impossibility of his having found a workable solution without using [pi]. The peasant, having spent a splendid evening at a pub down the way, came upon them, still arguing, the following day.

Masters, cried the peasant, have you still not solved the problem of how I should measure my use of tools to form a perfect circle?

We have solved it, fool, said the Highest of the High Muckety Mucks. You are riding upon an approximation of the truth, not the truth itself.

Well, good lord, said the peasant, that's good enough for anyone, in'it?
 
Oh, one last bit for those who are enjoying the exercise: Let's say you want to multiply a number by itself. The number is ...999.999... (that is, nines extending infinitely in both directions).

What extant rule of math grants the right to perform an arithmetic operation on the first multiplicand? Reduce it to its elementary properties, and you'll find that it insists on finding both the left-most and right-most digits in order to place the decimal point. Since our number extends infinitely in both directions, any point you pick must, by definition, be the center. But then so is any other point you pick. So when you and your friend each pick a point several feet apart, you can each claim to have chosen the exact center, yet clearly have chosen different spots. A no longer equals A, and you've thrown the very concept of identity out. Now all of your other algorithms falter and screech to a halt, for without identity, you cannot formulate your remaining axioms and theories.

Other than by trickery which assumes the very point it's attempting to prove (using infinity in an equation to subtract out another infinity), there remain no well-founded operations to work with infinite decimals.

Remember Douglas Adams' calculator, which, if you fed it sufficiently high numbers (say 6 * 10), would calculate the answer as being a "suffusion of yellow"?

Well, it may be that we can take a suffusion of yellow, subtract out an influx of incarnadine, mix in a bit of royal purple from the Nile mud, and come up with 60 for an answer, but to pretend these concepts are well-grounded simply because they produce useful answers is snootiness.

The middle of infinity is undefined, so you can never place your decimal point on the first multiplicand. Even though our secret weapon (Psss! Boris! Divide it by zero! Queeekly before it runs away!) gets hauled out, it fails because to divide something by anything else, you must line up the decimal points and find the extremes.

What's wrong with saying that quantity 1/3 simply can't be reproduced accurately as a repeating decimal? It can be approximated and manipulated as if it were a real representation, and after conversion we can convert it back to something meaningful (with a known rounding error), but it's just an imaginary thingamabob we made up to try to keep the decimal system consistent.
 
Dallas Dad, that's not the addition algorithm, it's an addition algorithm, but one of many.
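
For anyone curious, here is a sketch (my own, assuming equal-length digit lists) of one well-known alternative: most-significant-digit-first addition, which starts at the left-most column and buffers runs of 9s until it is known whether a carry will ripple into them.

[code]
def add_msd_first(a, b):
    """Add two equal-length digit lists, most significant digit first.
    A column sum >= 10 carries leftward into digits already seen, so the
    trailing run of 9s (and the digit just before it) stays unresolved
    until a later column settles its fate."""
    out = []                 # digits that can no longer be changed by any carry
    head, nines = None, 0    # unresolved digit, followed by `nines` 9s
    for s in (x + y for x, y in zip(a, b)):
        carry, d = divmod(s, 10)
        if carry:                       # carry: head absorbs it, buffered 9s wrap to 0
            out.append(1 if head is None else head + 1)
            out.extend([0] * nines)
            head, nines = d, 0
        elif d == 9:                    # a 9 might still be bumped by a later carry
            nines += 1
        else:                           # no carry can ever reach past a non-9 digit
            if head is not None:
                out.append(head)
                out.extend([9] * nines)
            head, nines = d, 0
    if head is not None:                # flush the final unresolved run
        out.append(head)
        out.extend([9] * nines)
    return out

assert add_msd_first([0, 9, 9, 5], [0, 0, 0, 7]) == [1, 0, 0, 2]   # 995 + 7
assert add_msd_first([3, 3, 3, 3], [3, 3, 3, 3]) == [6, 6, 6, 6]   # no carries
[/code]

For a repeating decimal like 0.333..., every column behaves the same way, so this left-to-right process settles into a repeating output; no right-most digit is ever consulted.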
 
Correct me if I'm wrong, but aren't all the others based on the same ultimate precepts as this one? If adding by columns didn't work, wouldn't the others fail, too?
 
The meaning of a mathematical expression is whatever we agree on. It's not intrinsic.

Likewise, the meaning of the phrase "mathematical expression" itself is whatever we agree on. I think the phrase "mathematical expression" refers to a soft goat cheese that goes well on salads. Actually, I don't think your statement makes much sense considering the meaning I give to the phrase, but whatever.

The notations in mathematics are standardized for precisely the reason that standard notations make communication of mathematical ideas simpler. Confusion can, of course, arise if you're not familiar with what the notation is used to denote, but in that case you can always ask someone who is familiar with the notation to define a term for you. W.D.Clinger did that for you. If you're still confused (unsure of the meaning of an infinite sum), you can ask for further elaboration. There's no need to go on a sarcastic diatribe against mathematicians.
 
Oh, one last bit for those who are enjoying the exercise: Let's say you want to multiply a number by itself. The number is ...999.999... (that is, nines extending infinitely in both directions).
That doesn't represent a real number. It's not even a 10-adic (in which ...9999 = -1).
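A quick sketch (mine) of why ...9999 = -1 in the 10-adics: a 10-adic number is pinned down by its residues mod 10^n for every n, and in each of those rings the n-nines tail plus 1 is exactly 0.

[code]
# In the 10-adics, x is determined by x mod 10**n for all n.
# The tail of ...9999 is 10**n - 1 in each ring Z/10**n, and adding 1
# wipes it out by cascading carries, exactly as -1 + 1 = 0 would.
for n in range(1, 10):
    nines = 10**n - 1                 # the last n digits of ...9999
    assert (nines + 1) % 10**n == 0   # so ...9999 behaves as -1
[/code]

But ...999.999... would need that behavior on both sides of the decimal point at once, and neither the reals nor the 10-adics provide a number system in which that string denotes anything.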

What's wrong with saying that quantity 1/3 simply can't be reproduced accurately as a repeating decimal?
It's wrong precisely because we can prove it wrong, given the way we've defined how those terms are to be interpreted. An infinite, positive decimal expansion is by definition the limit of its partial sums, and this particular one is bounded from above by 1/3 and therefore (by the completeness property of real numbers) has a least upper bound. There is a theorem in analysis that says that any monotonically increasing sequence with a lub converges to that lub. It remains to establish that there is no number less than 1/3 that is an upper bound, and hence 1/3 is the least upper bound (a task made infinitely easier by the Archimedean property of the reals: "for any ε>0, there is an integer n>0 with 1/n<ε").
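
To spell out that final step (my addition): the gap between 1/3 and the n-th partial sum is explicitly computable,

[latex]
\frac{1}{3} - \sum_{i=1}^{n} (3 \times 10^{-i}) = \frac{1}{3} - \frac{10^n - 1}{3 \times 10^n} = \frac{1}{3 \times 10^n}
[/latex]

so for any candidate bound b < 1/3, pick n with 1/(3·10^n) < 1/3 - b and the n-th partial sum already exceeds b. Hence 1/3 is the least upper bound.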

It should be noted that the completeness and Archimedean properties of the reals are not just assumptions pulled out of a hat. They can be explicitly proven given the way the real numbers are constructed (actually, multiple equivalent ways, most commonly either by Dedekind cuts or Cauchy sequences of rational numbers). Most students of mathematics, however, don't encounter this until their senior year in university.
 
I suspect the reason most mathematicians get so upset with people on this question is that they (the mathematicians) don't understand where the non-mathematicians are coming from.

It's not so much incredulity or stubbornness, as it is insistence that math exists in order to abstract from, manipulate, and then re-relate the answer to reality.
...
For an ordinary person, taking three apples and splitting them among friends means that each friend gets one apple, not that each friend gets 0.333... of three apples. ...
Look, if that's where you're coming from, then you shouldn't even ask the question. Whether 0.999... equals 1 is clearly a question of abstract mathematics because the entity "0.999..." has no meaning besides the one given by abstract mathematics. Therefore, it having nothing to do with actual, physical apples, there is nothing wrong with answering it using highfalutin' mathematics.

If you were really serious about your "farmer and wheel-maker" attitude, then you wouldn't bother with this question in the first place.
 
The number is ...999.999... (that is, nines extending infinitely in both directions).
What makes you think that that's a number?

What's wrong with saying that quantity 1/3 simply can't be reproduced accurately as a repeating decimal?
What's inaccurate about it? When I divide 1 by 3, 0.3333... is *exactly* what I get.
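
You can watch exactly why, by running the schoolbook division algorithm: the remainder at every step of 1 ÷ 3 is always 1, so every step produces the digit 3, forever. A small sketch of mine:

[code]
def long_division_digits(num, den, count):
    """First `count` decimal digits of num/den for 0 <= num < den,
    via the schoolbook algorithm: multiply the remainder by 10,
    take the quotient digit, keep the new remainder."""
    digits, rem = [], num
    for _ in range(count):
        rem *= 10
        digits.append(rem // den)
        rem %= den
    return digits

print(long_division_digits(1, 3, 10))   # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
[/code]

Since there are only den possible remainders, some remainder must eventually recur, which is why every fraction's decimal expansion either terminates or repeats.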
 
By the way, one might want to consider that using infinity in ordinary calculation leads to undefined results, too. For example, 1 * infinity = infinity. 2 * infinity = infinity. Since infinity == infinity, therefore 1 == 2.

Of course, one might want to consider that 1 * 0 = 0, 2 * 0 = 0. Since 0 == 0, therefore 1 == 2.

It's not the use of zero or "infinity" that is wrong here - it's the reasoning that is fallacious.

We also want to be able to use infinity -- a by-definition undefined quantity ...

Except that each of the mathematical concepts involving infinity is well-defined, and you know it.

But ah, says the mystic robed mathematician, I can indeed tell you the answer. First I must translate your requirements into my arcane and learned scribblings, and perhaps sacrifice a sparrow or two.

Except that there is nothing mystic or secret about mathematics, and you know it.

What is the purpose of this "exercise"? You said you were going to play Devil's Advocate, so it seems obvious that you don't actually believe the things you're saying. Then can you explain why you're saying them?

I could see some purpose if you were putting forth some relevant arguments, or at least attempted to do so. But now that this has been reduced to little more than plain demagogy, which apparently doesn't even represent what you think, I admit I no longer understand why you are having us read all that, or why we should be supposed to respond to that. Can you enlighten me about your intentions here?
 
The only way you can prevent this is to disallow the operations or declare the result meaningless (which any sane person would). Note, however, that you allow the same type of undefined behavior in your algorithm for operating on an infinite series because you introduce infinity as one of the terms. Even though it's common sense (and basic math) to see that infinity on one side cancels the infinity on the other, just as any normal number would, it's a huge leap of faith -- completely unsupported by any real-world application -- to use infinity this way. It's solely a definitional truth, used because it makes calculations more useful. It is not based on any experience with countable items.

Where in W.D. Clinger's proof does infinity appear as a term in a series?

It seems you have confused infinity appearing as a term in a series with a series with an infinite number of terms. If you object to the latter as well as the former, .999... is meaningless and you have nothing to say. If you don't object to the latter, you are simply incorrect. Which is it?
 
I'm going to play Devil's Advocate here, just for fun.

The reason the multiplication proof fails is that it employs an undefined operation. While moving the decimal point is a well-known shortcut for multiplying by 10, it's just that: A shortcut. It relies on the underlying algorithm, which works correctly in most cases.

I have to disagree with this. It's not just a shortcut, it's a consequence of the definition of a positional number system. It doesn't work correctly in most cases, it works in _all_ cases.
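
To underline the point (my restatement): in positional notation a number just is a weighted sum of its digits, and multiplying by 10 bumps every weight one place, whether the sum is finite or infinite:

[latex]
10 \times \sum_{i} d_i \times 10^{-i} = \sum_{i} d_i \times 10^{1-i}
[/latex]

The digits d_i are untouched; only their place values move, which is all "shifting the decimal point" ever meant.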
 
My purpose was multifaceted.

I wanted to give the lurkers who think that 0.999... != 1 some additional arguments to think through, some additional information about infinity and infinite series. I've often found that looking at a problem from a different angle helps me understand why it didn't make sense from the first angle.

I'd also hoped to spark some discourse among those who are more mathematically inclined. Revisiting basic premises can be a very useful exercise, especially when the audience is either unaware of them or unaware of their importance.

However, I see that the lurkers remain lurkers, and the mathematicians are resting on definitions, so my proffered foray into alternate ways of looking at things amused and enlightened only me.

Mea maxima culpa. The innumerate need a better advocate. Those were the best arguments of which I could think.
 
By the way, I really have no idea what you mean by "is a subset of 1" or what you think are the elements of 1.

I was being lazy at that point in my post. I meant two sets, written later in the post: 0 to 0.9999... and 0 to 1 (or any number less than 0.9999... and any number less than 1):

I was asking whether there was an element in one that was not in the other. If each set is a subset of the other, then they are equal; then just count each element in a one-to-one correspondence, and eventually you should arrive at 0.999999... = 1, since every other element in both sets corresponds to an element that is exactly equal to it.

Or that was what I was trying to get at, anyway.

ETA


All the interesting stuff would have to occur between 0.999... and 1, so why not simplify the question: is there a rational number X such that 0.999... < X < 1?

If you prefer it expressed as an interval, is (0.999..., 1) non-empty?


Yeah that is more to the point.
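
For what it's worth, a sketch of why that interval must be empty: every partial sum 1 - 10^{-n} is at most 0.999..., so any x strictly between 0.999... and 1 would have to satisfy, for every n,

[latex]
1 - 10^{-n} \leq 0.999\ldots < x < 1 \quad \Rightarrow \quad 0 < 1 - x < 10^{-n}
[/latex]

By the Archimedean property no positive number lies below 10^{-n} for every n, so 1 - x = 0, contradicting x < 1. Hence (0.999..., 1) is empty.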
 
...and the mathematicians are resting on definitions...

But that's all there is. Mathematics consists of sets of objects, along with a corresponding structure of axioms and definitions, nothing more.

Given the standard set of definitions and axioms that define the set of real numbers, 0.9~ and 1 are the same by definition, since the value of 0.9~ is defined to be the limit of the sequence {0.9, 0.99, 0.999, ...}, which can be shown to be 1.
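
Concretely (my elaboration), the n-th term of that sequence falls short of 1 by exactly 10^{-n}, which shrinks below any positive tolerance:

[latex]
1 - \underbrace{0.99\ldots9}_{n\ \mathrm{nines}} = 10^{-n} \rightarrow 0 \quad \mathrm{as}\ n \rightarrow \infty
[/latex]

So the limit of the sequence is 1, and by definition that limit is what "0.9~" names.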

It makes no sense to behave as if the assertion that 0.9~ = 1 is "actually" true or "actually" false. It either follows from whatever axioms and definitions we choose, or it doesn't. It is possible to construct alternate number systems in which 0.9~ does not equal 1. In the reals, 0.9~ does happen to equal 1.
 
But that's all there is. Mathematics consists of sets of objects, along with a corresponding structure of axioms and definitions, nothing more.

Given the standard set of definitions and axioms that define the set of real numbers, 0.9~ and 1 are the same by definition, since the value of 0.9~ is defined to be the limit of the sequence {0.9, 0.99, 0.999, ...}, which can be shown to be 1.

It makes no sense to behave as if the assertion that 0.9~ = 1 is "actually" true or "actually" false. It either follows from whatever axioms and definitions we choose, or it doesn't. It is possible to construct alternate number systems in which 0.9~ does not equal 1. In the reals, 0.9~ does happen to equal 1.

Haha, what is math if not a set of definitions and axioms?
 
Yes... I'm not really sure about the definition, but I remember it was something like this: two real numbers are the same if there is no other real number between them. And that applies for 0.999... and 1. It is defined like this. So there is no need for proof. All examples showing that it indeed works like this can only make it more clear, easier to understand, but they are tautologies, without informational value.
 
