
0.9 repeater = 1

I wanted to give the lurkers who think that 0.999... != 1 some additional arguments to think through, some additional information about infinity and infinite series.
By arguing that they don't make sense?

I'd also hoped to spark some discourse among those who are more mathematically inclined. Revisiting basic premises can be a very useful exercise, especially when the audience is either unaware of them or unaware of their importance.
The definitions are the premises, and they have been revisited. Or by "premise" do you perhaps mean why those definitions are even there in the first place? Why did mathematicians define "real number" in this way or that?

There are many reasons. One of them is this. Take a line and partition it into two parts, A and B, such that every point in A is to the left of every point in B. Then there is a unique point that's to the right of everything in A and to the right of everything in B, excluding itself.

Intuitively, this captures the idea of "continuity" of geometrical lines: that you can't add a point "between" the other points; that they're all "already there" in this sense. Appropriately enough, this geometrical property is sometimes called the "Dedekind axiom" (or, in a somewhat different but equivalent form, the "completeness axiom") and is the inspiration behind constructing real numbers out of the aforementioned Dedekind cuts (in which case completeness becomes a theorem rather than an axiom).

So the short of it is that the real numbers were constructed explicitly to reproduce geometrical notions about lines. Of course, one can ask why these notions of continuity or completeness are important, and the only answers to that are that (1) they make things simple and (2) the things they make simple are actually useful (e.g., completeness is the backbone of working with limits, which is the backbone of calculus, which in turn is very useful...).
 
Well said, each of you. Perhaps my questions had some value after all.

FWIW, a set of defined operations and axioms is not how non-mathematicians normally look at math. This is the disconnect, and why (imho) so many go away shaking their heads and refusing to believe. In their minds, they're saying, "But you're not proving anything, you're just saying it." They go away thinking that 0.999... still does not equal 1 because it just can't. That it follows both naturally and ineluctably from other definitions they do accept doesn't matter, because they don't look at these truths as being definitional, but rather as useful operations upon real quantities.

I think the biggest ah-ha moment one must experience to grow from math user to mathematician is realizing the formal nature of math, and understanding in your gut that it's a system of symbol manipulation rather than a system of counting or dividing real-world objects.
 
I think the biggest ah-ha moment one must experience to grow from math user to mathematician is realizing the formal nature of math, and understanding in your gut that it's a system of symbol manipulation rather than a system of counting or dividing real-world objects.

Yes, this is exactly right.

I think the best explanation of this notion can be found in the book Gödel, Escher, Bach, by Douglas Hofstadter.
 
I think the biggest ah-ha moment one must experience to grow from math user to mathematician is realizing the formal nature of math, and understanding in your gut that it's a system of symbol manipulation rather than a system of counting or dividing real-world objects.
Had you said "as well as" instead of "rather than", you'd have been entirely right.

Logic and mathematics have at least this much in common: both formalize principles of argument that have been observed to lead from true premises to true conclusions, while forbidding forms of argument that have been observed to lead from true premises to false conclusions.

In its earliest days, the axiomatic method of mathematics was closely connected to systems of counting and measuring/dividing. Building upon that foundation, mathematicians have extended the axiomatic method to many other domains.

One of the distinguishing features of "modern" mathematics is the willingness to consider things that have no known applications. Some pure mathematicians, such as G. H. Hardy, have even boasted of the uselessness of their work. (In his less hyperbolic moments, however, Hardy acknowledged that the distinction between pure and applied mathematics has little to do with utility.) As it happens, Hardy's own research has found applications ranging from quantum physics to cryptography. If you attempt to dismiss modern mathematics as meaningless symbol manipulation, then it will be hard for you to understand why pure mathematics so often finds real-world applications long after the mathematics was developed.
 
Actually, I think of physics (and the other natural sciences) as a way of telling which parts of mathematics apply to which parts of that science.
You can, for example, say (though mostly you don't) that you can add apples: 1 kg of apples plus another 1 kg of apples is 2 kg of apples. But that is not mathematics; the mathematics is 1+1=2. You must specify exactly what applies to what, and to what extent. For example, 10 kg of enriched uranium plus another 10 kg won't give you 20 kg, but more like 20 kt. :-D
 
One of my favorite books of all time. If it's not the top spot, it's definitely on the short list.

I feel the same way. I usually just say that I think it's the single greatest non-fiction book ever written, period.

It's kind of funny. GEB is the kind of book that, if it were about another topic, might spawn a cult.
 
I think the moment of "click" for me in distinguishing mathematics as a formal system designed to DESCRIBE reality (or, alternatively, to describe anything that follows enough rules that a formal system is useful) was finding out that there were true but undecidable propositions.
 
I think the moment of "click" for me in distinguishing mathematics as a formal system designed to DESCRIBE reality (or, alternatively, to describe anything that follows enough rules that a formal system is useful) was finding out that there were true but undecidable propositions.

But is that really the right way to say it? What is "true"?

My understanding of Gödel's theorem is more or less that it says that in any sufficiently complex formal system, there exist well-formed statements such that neither the statement nor its negation can be derived as a theorem.

"Truth" does not come into it, since formal systems do not deal in truth. Under the aegis of any formal system, a string of symbols may have one of four possible statuses*:

1) Not well-formed.
2) Theorem.
3) Negation is a theorem.
4) Undecidable.


* states? statii?
 
Erratum: I should have said "to the left of everything in B." That's what I get for posting under insomnia.
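
In symbols, the property I was describing comes out as follows (a sketch, with the correction applied):

\[
A \cup B = \mathbb{R},\;\; A, B \neq \emptyset,\;\; a < b \ \text{ for all } a \in A,\, b \in B
\;\;\Longrightarrow\;\;
\exists!\, c:\;\; a \le c \le b \ \text{ for all } a \in A,\, b \in B.
\]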
They go away thinking that 0.999... still does not equal 1 because it just can't. That it follows both naturally and ineluctably from other definitions they do accept doesn't matter, because they don't look at these truths as being definitional, but rather as useful operations upon real quantities.
I really don't think the issue is what follows from the definitions they do accept; rather, it's a disconnect with the intuitions people already have. For example, most people have no trouble picturing space as infinitely and continuously divisible. That's a geometrical intuition people have had for at least 23 centuries, to the point that it's only in relatively recent times that it even occurred to anyone that "geometry" could possibly refer to something different (curved rather than flat, discrete rather than continuous, finite instead of infinite, etc.).

If they're shown something like this:
[image: a square repeatedly subdivided to illustrate 1/2 + 1/4 + 1/8 + ... = 1]

I suspect that most people won't have any trouble "seeing" that 1/2 + 1/4 + 1/8 + ... = 1, and that the reaction of "but... infinity!" will be far less common. You can see the entire thing at once, so that objection never really gets off the ground.

Yet this question is exactly analogous. In fact, the picture above is a geometric proof that, in binary, 0.111... = 1. Sums like these are called geometric series, and one can make a pretty picture for the decimal case of 9/10 + 9/100 + 9/1000 + ... just as well, although it won't be as nicely symmetrical as this one.
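
For anyone who wants the algebra behind such pictures, here is the standard geometric-series computation (a sketch, for a ratio r with |r| < 1; nothing here is specific to base 2 or base 10):

\[
\sum_{k=1}^{\infty} a\,r^{k} \;=\; \frac{a\,r}{1-r},
\qquad\text{so}\qquad
\sum_{k=1}^{\infty} \frac{1}{2^{k}} \;=\; \frac{1/2}{1/2} \;=\; 1
\qquad\text{and}\qquad
\sum_{k=1}^{\infty} \frac{9}{10^{k}} \;=\; \frac{9/10}{9/10} \;=\; 1.
\]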

I think the biggest ah-ha moment one must experience to grow from math user to mathematician is realizing the formal nature of math, and understanding in your gut that it's a system of symbol manipulation rather than a system of counting or dividing real-world objects.
I agree in principle. But that doesn't mean that we can't often apply intuitions about real-world objects... or rather, some idealized abstractions of such. Much of mathematics was designed to capture some of those properties, after all. As a mathematician, one either does applicable math or finds formal connections and generalizations between other maths, so in practice the real world is usually "somewhere down the chain." Thus, it shouldn't be too surprising that mathematics made from other mathematics (with no explicit connection to the real world) sometimes turns out to actually find relevance to the world later on.
 
I feel the same way. I usually just say that I think it's the single greatest non-fiction book ever written, period.

It's kind of funny. GEB is the kind of book that, if it were about another topic, might spawn a cult.

Interesting. I read it years ago and it left very little impression. I'll have to try it again.
 
But is that really the right way to say it? What is "true"?

My understanding of Godel's theorem is more or less that it says that in any sufficiently complex formal system, there exist well-formed statements such that neither the statement nor its negation can be derived as a theorem.

"Truth" does not come into it, since formal systems do not deal in truth.

Sure they do - I'd say the entire point of formal logic is to determine the truth of statements given some set of axioms.

What Gödel's first incompleteness theorem showed is that in any consistent logical system that isn't too trivial, one can formulate statements like

sufficiently strong logical system said:
this statement cannot be proven.

If it can be formulated, that statement is either true but cannot be proven, or the logical system it is formulated in is inconsistent.
 
I think the moment of "click" for me in distinguishing mathematics as a formal system designed to DESCRIBE reality (or, alternatively, to describe anything that follows enough rules that a formal system is useful) was finding out that there were true but undecidable propositions.
But is that really the right way to say it? What is "true"?
Yes. Gödel's first incompleteness theorem is proved by constructing a sentence that basically says "I am unprovable within the specific formal system in question." That's true, but unprovable.

(If it were provable, then the sentence would be false, meaning the system would be capable of proving a false statement, which would contradict the assumed consistency of the system. Under the explicit assumptions of the theorem, therefore, the sentence cannot be provable within the system. In other words, the sentence must be true.)
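
Schematically (a sketch; here T is the formal system, Prov_T its provability predicate, and ⌜G⌝ the code of the sentence G, none of which the posts above spell out), the constructed sentence satisfies

\[
T \vdash\; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_{T}(\ulcorner G \urcorner),
\]

so if T is consistent, then T cannot prove G, and hence what G asserts about itself is the case, i.e. G is true.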
 
But, for Darwin's sake ... why?

As in "why is Gödel's incompleteness theorem at all important?"

Well, before Gödel, there was a belief that mathematics could prove everything mathematical. Hilbert, around the turn of the last century, speculated that even physics could be reduced to a few axioms and then everything true in physics could be derived and proven mathematically.

Gödel ended all that. Anything at the level of arithmetic and above cannot be both consistent (i.e. free of contradiction) and complete (able to prove all true statements about itself).
 
0.999... is a limit: by completeness of the real numbers, and since the sequence (0.9, 0.99, 0.999, 0.9999, ...) is increasing and bounded above (by 1 in particular), this sequence converges to some number, say x, which we denote by 0.999... If x is not equal to 1, then 1-x > 0, and so x < x + (1-x)/2 < 1, i.e. there is a number strictly between x and 1. But for every number less than 1, we can find n sufficiently large so that 0.999...9 (n nines) lies between that number and 1, and every such finite truncation is at most x. Contradiction. So x = 1.
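
The "find n" step rests on the finite truncations, which satisfy (a one-line computation)

\[
\underbrace{0.9\cdots9}_{n\ \text{nines}} \;=\; \sum_{k=1}^{n} \frac{9}{10^{k}} \;=\; 1 - 10^{-n},
\]

so given any y < 1, any n with 10^{-n} < 1 - y puts this truncation strictly between y and 1.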

Sorry if this was posted before.
 
Anything at the level of arithmetic and above cannot be both consistent (i.e. free of contradiction) and complete (able to prove all true statements about itself).

Small clarification: there are theories that are sufficiently complex and consistent and complete - in fact, any consistent theory can be made complete (Lindenbaum's lemma).

The actual problem, as Gödel showed, is not that such theories wouldn't exist, but that they aren't recursively enumerable. So they're of little use to us (like in the physics context you noted), but they do exist as mathematical entities (like other noncomputable things, such as the Busy beaver function for example).
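
For reference, the lemma can be stated roughly like this (a sketch, glossing over the fine print about the language and the deductive system):

\[
T \ \text{consistent} \;\;\Longrightarrow\;\; \text{there is a consistent, complete theory } T' \ \text{with}\ T \subseteq T'.
\]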
 
Small clarification: there are theories that are sufficiently complex and consistent and complete - in fact, any consistent theory can be made complete (Lindenbaum's lemma).


Be careful here. Although it has been a long while since I dealt with such things formally, I recall that the meaning of "complete" as used in Gödel's completeness theorem (and Lindenbaum's lemma) is slightly different from its meaning in the incompleteness theorem.
 
Small clarification: there are theories that are sufficiently complex and consistent and complete - in fact, any consistent theory can be made complete (Lindenbaum's lemma).

The actual problem, as Gödel showed, is not that such theories wouldn't exist, but that they aren't recursively enumerable. So they're of little use to us (like in the physics context you noted), but they do exist as mathematical entities (like other noncomputable things, such as the Busy beaver function for example).
What's more, you need an equivalent of the axiom of choice just to prove the completion always exists. Skeptics who (quite properly!) regard the axiom of choice as unproven are free to regard the existence of Lindenbaum's completed theories as equally unproven.

Be careful here. Although it has been a long while since I dealt with such things formally, I recall that the meaning of "complete" as used in Gödel's completeness theorem (and Lindenbaum's lemma) is slightly different from its meaning in the incompleteness theorem.
Lindenbaum's lemma and Gödel's incompleteness theorems involve the same notion of completeness: a theory is complete if and only if it is consistent but adding any new sentence to the theory would make it inconsistent. With Gödel's incompleteness theorems, the theory is incomplete because you can add the Gödel sentence to it while retaining consistency.

Gödel's completeness theorem says every valid first-order formula is provable. Putting that together with his incompleteness theorems, we conclude that the first-order Gödel sentences must not be valid. From that, we conclude that the first-order Gödel sentences must be false in some nonstandard model of arithmetic, so those nonstandard models play the villain in this story.

ETA: A first-order Gödel sentence is true in the standard model of arithmetic, but unprovable because it's false in some non-standard model.
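
Spelled out as a chain (a sketch, writing ⊢ for provability from the arithmetic theory T in question and ⊨ for semantic consequence):

\[
\text{completeness:}\;\; T \vDash \varphi \Rightarrow T \vdash \varphi,
\qquad\text{hence}\qquad
T \nvdash G \;\Rightarrow\; T \nvDash G \;\Rightarrow\; \text{some model } \mathcal{M} \vDash T \cup \{\neg G\}.
\]

Since G holds in the standard model, any such M must be nonstandard.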
 
