
Open-Ended Math Questions?

Your proof contains an egregious algebraic equation error.


10N - N = (90+900+9000+...) - (9 + (90+900+9000+...)) (subtract N from each side)


What you have done is simply add 9 to your series. I didn't do that in my step 2. What I did is multiply BOTH sides by 10.


I'm sorry, I don't see that.

Code:
N = 9 + 90 + 900 + 9000 + ...
  = 9 + (90 + 900 + 9000 + ...)

therefore, 10N = 9*10 + 90*10 + 900*10 + 9000*10 + ...
               = 90 + 900 + 9000 + ...

Therefore, N = 9 + 10N
          9N = -9
           N = -1

The problem isn't with the algebra, but with the undefined result of subtracting one divergent sum from another. And, of course, 0.99999... converges, but the infinite quantities above don't, hence the difference.
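The contrast being pointed at here is easy to check numerically. As an illustrative sketch (not part of the original post), using Python's exact `fractions.Fraction` arithmetic:

```python
from fractions import Fraction

# Partial sums of the two series under discussion.
convergent = Fraction(0)   # 9/10 + 9/100 + 9/1000 + ...  (i.e. 0.999...)
divergent = Fraction(0)    # 9 + 90 + 900 + ...
for n in range(1, 21):
    convergent += Fraction(9, 10**n)
    divergent += 9 * 10**(n - 1)

# The convergent partial sum is within 10^-20 of 1, since 1 - S_n = 10^-n.
assert Fraction(1) - convergent == Fraction(1, 10**20)

# The divergent partial sum is a 20-digit string of nines and just keeps growing.
assert divergent == 10**20 - 1
```

The first sum closes in on 1; the second has no limit at all, which is why "10N - N" is meaningless for it.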
 
My proof is sound, solid, and formal enough for the appropriate and intended audience.

The intended audience when I stated my proof was people who are uncomfortable with infinity, such as Soapy Sam...

I 100% agree with you that it is ridiculous to teach younger students about limits and the like, but it's still not a watertight mathematical proof. It may be correct, but you wouldn't get a roomful of mathematicians to bet their houses on 0.9999... = 1 based on it. What if it had multiple values if you cleverly rearranged the terms? What if such a number does not exist, or made no sense? This is why you must be so precise in formal proofs.
 
The intended audience when I stated my proof was people who are uncomfortable with infinity, such as Soapy Sam...
Incidentally, a proof of this which uses limits and convergence does not rely on any infinities or infinitesimals.

lim_(N -> infinity) sum_(n=1..N) 9*10^(-n),

which is what .999... really means, is actually shorthand for:

x such that for all e > 0 there exists M such that for all N > M, |x - sum_(n=1..N) 9*10^(-n)| < e.

The infinity in the limit is just annoying notation.

Since real numbers can be defined as the limits of sequences, it's easy to show that the above sequence has the same limit as the sequence {1, 1, 1, ... } and thus is the same real number.
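That e-M definition can be made concrete with a small sketch (my own illustration, not from the post): given any e > 0, exhibit a witness M and verify that every partial sum past it lands within e of 1.

```python
import math
from fractions import Fraction

def partial_sum(n):
    """S_n = 9/10 + 9/100 + ... + 9/10^n, computed exactly."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

def witness_M(e):
    """An M such that for all N > M, |1 - S_N| < e.
    Since 1 - S_N = 10^-N, any M with 10^-M <= e will do."""
    return math.ceil(-math.log10(e))

e = Fraction(1, 10**6)
M = witness_M(e)
for N in range(M + 1, M + 10):
    assert abs(Fraction(1) - partial_sum(N)) < e
```

No infinite object is ever manipulated: only finite sums and a finite witness M.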

Once again, I don't intend for that proof to have anything to do with 7th graders, I just prefer it for my own quirky personal reasons.
 
This is gibberish. Arithmetic is a form of mathematics; the question in the OP hinges on the understanding of the term "financial outcome." (I've seen identical questions posed using different terms as well.) Similarly, the "average" problem I posed above is an arithmetic problem where the correct answer depends on your understanding of the term "average."

The definitions used in arithmetic are usually sufficiently simple that few people are confused by them -- until you get into questions like "how long is half a piece of chalk," where the meaning of "half a piece" is not clear and needs to be examined.

The only difference -- and it's more apparent than actual -- is that students studying arithmetic are still learning the mechanics of numerical manipulation, and so still need to be tested on it. But the whole point of story problems, from elementary school on up, is to learn to manipulate the linguistic definitions as necessary. What does it really mean when I say "John gives Mary two apples"? Does that mean that the student should add or subtract?

And the reason that story problems are considered so difficult is because they combine both aspects -- students who are not comfortable with addition alone will hardly be happy with addition plus definitional issues.....


Drkitten- it can't be gibberish if you understand it. Unless...

Arithmetic is not a form of mathematics, it's a part of mathematics. (At least as taught in the west. I daresay a case might be made for teaching it as part of logic, science, or gods forfend- accountancy.)
If we're referring to decimal arithmetic, which is what's normally taught in schools, all the definitions are in place, so there is no ambiguity about the question. The answer is therefore defined by the question.

When two mathematicians use exactly the same definitions of terms to set up a mathematical question, the same is true.

The vagueness sets in when the definitions are not clear. As in the OP. That's when the gibberish starts.
 
A further level of abstraction.

You don't need to use the word infinity at all when talking about 0.999... . 0.999... is usually defined as the limit of the sequence 0.9, 0.99, 0.999, 0.9999, ... i.e. the sequence where term n is 1 - 10^-n. And, using a bit of first-year uni maths, the limit of that sequence is 1. Absolutely no question about it. I'm not sure what complaint you have about infinity...?


Same problem I have with perpetual motion machines.
I keep asking to see the evidence and I keep getting excuses.

I have a simplistic view of number which no mathematician could accept for an instant. Two cows is a quantity. "Two" is the value of the quantity, "cows" is the object of the quantity.

An infinite number of cows is impossible. There are not enough hooves in the universe. The same argument (logical, however silly) would seem to hold for everything except homoeopathic medicine and dancing angels.
I remain sceptical that the concept of infinity has any reality.

"Two" is not a quantity. It's a mental construct. A concept. You can do what you want to with concepts, within the limits of human imagination.
That doesn't make them real.

I'm far from convinced that "number" has any actual reality, outside the brain (probably including some non-human brains).

I'm open to argument though. And evidence of course.:D
 
No. The series expansion of N contains, as a subset, the series expansion of 10N formed by the simple expedient of moving the decimal point one place to the right (hence producing the extra 0 on the end).

(1)
N = 9/10 + 9/100 + 9/1000 + 9/10000 ... (AS's decimal, as a series)
10N = 10*9/10 + 10*9/100 + 10*9/1000 ... (multiply both sides by 10)
10N = 9 + 9/10 + 9/100 + 9/1000 ... (each term can be simplified)
10N = 9 + N (I recognise that series)
9N = 9 (subtract N from each side)
N = 1 (QED)

I believe this is correct, but it relies on exposure to the concept of expanding the notation of .999... into the sum of the series. Again, that is beyond 7th grade math. It is also entirely unnecessary to understand the proof, and it unnecessarily complicates it. Part of the charm of my proof is its conciseness and simplicity. I really don't understand why this concept is so difficult to grasp.


(2)
N = 9 + 9*10 + 9*100 + 9*1000 ... (GA's series)
10N = 10*9 + 10*9*10 + 10*9*100 ... (multiply both sides by 10)
10N = 9*10 + 9*100 + 9*1000 ... (each term can be simplified)
10N = N - 9 (I recognise that series)
9N = -9 (subtract N from each side)
N = -1 (QED)

In (1) the series is convergent, so N has a finite value; for finite numbers, 10N - N is a defined mathematical operation. In (2) the series is divergent and N (in this case) would be infinite; for infinity, 10*∞ - ∞ is an undefined mathematical operation.
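The difference can also be seen at the level of finite partial sums of series (1). A sketch (my own illustration, not from the post): multiplying the n-term partial sum by 10 shifts the series and exposes a remainder of 9*10^-n that vanishes as n grows, which is exactly what licenses the "10N = 9 + N" step in the convergent case.

```python
from fractions import Fraction

def S(n):
    """Partial sum 9/10 + 9/100 + ... + 9/10^n of series (1)."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in range(1, 15):
    # Multiplying by 10 shifts the series: 10*S(n) = 9 + S(n-1).
    assert 10 * S(n) == 9 + S(n - 1)
    # Hence 10*S(n) - S(n) = 9 - 9*10^-n, which tends to 9 as n grows.
    assert 9 * S(n) == 9 - Fraction(9, 10**n)
```

For the divergent series (2) no such bookkeeping is possible: the partial sums themselves run off to infinity.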

AS presented an algebraic recipe that 'proves' 0.99999... = 1; the same recipe was used by GA to show that ...99999 = -1. The recipe may simplistically show that .9999... = 1, but it is not mathematically rigorous.

This is simply incorrect. There is a huge difference between GA's series and mine. GA's series is infinitely large. You are right. It does not converge to a limit. It continues to expand and grow infinitely. It has no limit.

As it does not have a limit, you cannot prove that it is equal to any real number, as GA thinks he has done, thus exposing some alleged flaw in my proof. You and GA both know very well that his series does not converge to -1, and its limit is not -1. Its limit is infinite, and thus undefined in the set of real numbers.

What you and GA seem to misunderstand is that my proof is not trivial. It is not a bunch of magical mystical fairy dust nonsense. It is a true algebraic proof that .999... equals 1. That's all it is. It isn't expressed in terms of series, so please stop trying to redefine it in those terms.

Please get this through your heads: There is no flaw in my proof. It's not as rigorous as some proofs in calculus and analysis. I haven't carefully stated axioms and haven't set forth a sophisticated theorem. I didn't claim such a thing. Nevertheless, there is simply no logical flaw, no algebraic flaw, no numerical flaw, no mathematical flaw in it. Period. It is absolutely sound and absolutely true in the set of real numbers in base 10 that .999... = 1.

I have indeed proven it algebraically. You can use scare quotes all you want to mock it, and you can call it a "recipe" if you like, but you would be wrong to do so. It is not a "proof." It is a proof, and a damn good one too.


N = ...999999
10N = ...999990
10N - N = ...999990 - ...999999 = -9
9N = -9
N = -1

No component parts there that I can see.
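For what it's worth, the "...999999" notation is not a real number, but it is meaningful in modular (and, in the limit, 10-adic) arithmetic, where a string of k trailing nines behaves exactly like -1 modulo 10^k. A quick illustrative check (my own sketch, not part of the thread):

```python
# Truncating ...999 to its last k digits gives 10^k - 1 (k nines).
# Modulo 10^k, that number behaves exactly like -1.
for k in range(1, 10):
    nines = 10**k - 1        # e.g. k = 3 gives 999
    mod = 10**k
    assert (nines + 1) % mod == 0                     # ...999 + 1 = ...000
    assert (10 * nines - nines) % mod == (-9) % mod   # 10N - N = -9, as claimed
```

So the computation is internally consistent - just not in the real numbers.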

That is utter gibberish. What the hell is ...999999?

Perhaps you and others should read wikipedia's entry. It has some comments about several ways to prove .999... = 1, including the fractional method and the algebraic proof I've given. It even contains some comments about common misconceptions about it and common reasons why so many people simply cannot accept that it is true.

http://en.wikipedia.org/wiki/Proof_that_0.999..._equals_1

This is discussed all over the net. It gets quite heated sometimes.

AS
 
If I might turn the heat up just a tad.
Earlier in the thread you mentioned the simple concept of a repeating decimal.

Appreciate that you are dealing with a simpleton here.
I know what 9 is.
I can make a stab at .9, even at .9999 , but I have no idea at all what .999... is.
To me, 0.9 is "Nine tenths of one" and so on. From the context of the thread, .9999... appears to be an infinite number (not infinitely large; infinitely long). I don't understand how, without actually expanding it to infinity, we can know what, if anything, an infinitely long number is equal to. We're back at Turing's halting problem.

Either .999... is one, or it is not. If it is one, then what is .999... / 2?
 
Same problem I have with perpetual motion machines.
I keep asking to see the evidence and I keep getting excuses.

I have a simplistic view of number which no mathematician could accept for an instant. Two cows is a quantity. "Two" is the value of the quantity, "cows" is the object of the quantity.

An infinite number of cows is impossible. There are not enough hooves in the universe. The same argument (logical, however silly) would seem to hold for everything except homoeopathic medicine and dancing angels.
I remain sceptical that the concept of infinity has any reality.

"Two" is not a quantity. It's a mental construct. A concept. You can do what you want to with concepts, within the limits of human imagination.
That doesn't make them real.

I'm far from convinced that "number" has any actual reality, outside the brain (probably including some non-human brains).

I'm open to argument though. And evidence of course.:D

Am I correct in saying that your problem with infinity is that it's not physical? I don't think I understand what your viewpoint is.

Mathematicians could not care less if numbers do not exist. They leave these questions to philosophers. :p Mathematics is about asking questions of the form "given assumptions X, Y, Z, what facts follow?" - and hopefully assumptions X, Y, Z are not disprovable or contradictory.

The classic example that most people object to is letting i be the square root of -1. What the heck is that? You can't have i cows. It makes no sense...! But if you drop these qualms and start thinking, "ok, so what if i does exist?" then you find it has fairly interesting and useful properties - in the same way that negative numbers are interesting and useful, even if you can't really imagine -5 cows (your bank would certainly argue so if you take out an overdraft!).

So, fair enough, perhaps numbers don't exist. But they're still damn useful. :D
 
I'm sorry, I don't see that.

Sorry, but it's true. GA performed an arithmetic function on the left side of the equation. He multiplied it by 10, just like I did.

Now, stop thinking in terms of expanding series for a moment. What should GA do on the other side of the equation to keep it an equation (without violating the rules of algebraic manipulation)? Obviously, he should multiply that right side by 10, right? Of course.

What did GA actually do to the right side? He continued to expand his infinitely large series, only he had cheated at his starting point and began at 90, rather than 9. So, to look like he was performing the same kind of arithmetic function on the right side, he expanded his series to the left side of the number scale. He simply showed the 9 that should have been there in the first place. That isn't multiplying the right side by 10. I called it adding 9 to it, but it was really meaningless because his series is divergent, as you note.

You're right in that multiplying it by 10 is undefined. GA set it up that way. I really don't understand the point. I didn't select any random series to manipulate with algebra and arithmetic functions to prove it's 1. I selected the special case of .999... because that happens to be 1. I was merely offering a simple proof that it is so.

Code:
N = 9 + 90 + 900 + 9000 + ...
  = 9 + (90 + 900 + 9000 + ...)

therefore, 10N = 9*10 + 90*10 + 900*10 + 9000*10 + ...
               = 90 + 900 + 9000 + ...

Therefore, N = 9 + 10N
          9N = -9
           N = -1

The problem isn't with the algebra, but with the undefined result of subtracting one divergent sum from another. And, of course, 0.99999... converges, but the infinite quantities above don't, hence the difference.

A rose is a rose is a rose by any other name. You cannot perform an arithmetic function on an infinitely large sum and get anything but an undefined result. GA knew that.

The difference here is that .999... isn't infinitely large. It contains a digit after the decimal point that repeats infinitely. We can indeed perform arithmetic functions on it and get real, defined results. We can divide it by 9, or 3, or 1. We can add integers to it. We can subtract other repeating single digit decimals from it.

In real analysis, we learn that its limit is 1. Thus, .999... is 1.

AS
 
Incidentally, a proof of this which uses limits and convergence does not rely on any infinities or infinitesimals.

lim_(N -> infinity) sum_(n=1..N) 9*10^(-n),

which is what .999... really means, is actually shorthand for:

x such that for all e > 0 there exists M such that for all N > M, |x - sum_(n=1..N) 9*10^(-n)| < e.

The infinity in the limit is just annoying notation.

Since real numbers can be defined as the limits of sequences, it's easy to show that the above sequence has the same limit as the sequence {1, 1, 1, ... } and thus is the same real number.

Once again, I don't intend for that proof to have anything to do with 7th graders, I just prefer it for my own quirky personal reasons.

Good God, man, you prefer that?

AS
 
The intended audience when I stated my proof was people who are uncomfortable with infinity, such as Soapy Sam...

I 100% agree with you that it is ridiculous to teach younger students about limits and the like, but it's still not a watertight mathematical proof.

Please show me the problem with it and explain why it's a problem.

It may be correct, but you wouldn't get a roomful of mathematicians to bet their houses on 0.9999... = 1 based on it. What if it had multiple values if you cleverly rearranged the terms? What if such a number does not exist, or made no sense? This is why you must be so precise in formal proofs.

That makes no sense. .999...does exist. It is 1. They are one and the same, only expressed differently.

The only qualification you need to make is that it is true in the realm of real numbers in base 10.

AS
 
Well, like I say, I'm a word nerd. As far as I'm concerned, the matter was settled at 1-0.9999...=0.0000...=0, because if X-Y=0, then X=Y.

And the 10N proof didn't make sense to me b/c it appeared to involve a shifty shift.

Your point may be right, but who knows -- I don't trust you much AS, because while I may not know beans about math, I do know this: Don't put your life in the hands of no backwoods Southern lawyer. ;)

He he. OK, Vickie.

You are not the first to accuse me of using something shifty in there. When I was home during a break in college, a fellow math major and I were discussing this proof in a bar. A local university math professor happened to be sitting very near us, and overheard our conversation. He interrupted us and asked me to show him the proof.

I pulled out a cocktail napkin and a pen and wrote it out for him. He tried to show me an error in it, so I showed him again. I must have written it 5 or 6 times for him and my friend, and they both insisted there was some trick to it, but they couldn't identify it (to be fair to the good professor, he was pretty drunk).

The simple truth of the matter is that because it necessarily involves the concept of infinity, it is horribly counterintuitive. Also, we are so used to the conventions of notation that it seems 0.anything couldn't possibly be equal to 1.0.

Sorry to burst everyone's intuitive bubbles, but 0.999... is 1 in disguise. It is the real 1, not an imposter.

My proof has confounded many very smart persons for decades or longer. I assumed incorrectly when I used it in college at the bar that it was well known among mathematicians. This was in 1984, so it was before the internet as we now know it existed (oh sure, it was ARPANET before then, but was limited mostly to military facilities and universities, and only eggheads used it). The internet has made this proof pretty widely known, at least compared to back then.

AS
 
Sorry, but it's true. GA performed an arithmetic function on the left side of the equation. He multiplied it by 10, just like I did.

Now, stop thinking in terms of expanding series for a moment. What should GA do on the other side of the equation to keep it an equation (without violating the rules of algebraic manipulation)? Obviously, he should multiply that right side by 10, right? Of course.

What did GA actually do to the right side? He continued to expand his infinitely large series, only he had cheated at his starting point and began at 90, rather than 9. So, to look like he was performing the same kind of arithmetic function on the right side, he expanded his series to the left side of the number scale. He simply showed the 9 that should have been there in the first place. That isn't multiplying the right side by 10. I called it adding 9 to it, but it was really meaningless because his series is divergent, as you note.

A rose is a rose is a rose by any other name. You cannot perform an arithmetic function on an infinitely large sum and get anything but an undefined result. GA knew that.

Uh? Your third paragraph is a bit wordy. Dr. Kitten's point is that subtracting "infinity" from "infinity" makes no sense. You are, however, perfectly entitled to multiply "infinity" by 10 and get "infinity" - that's an arithmetic function that's defined. Hilbert's Hotel, anyone?
 
Either .999... is one , or it not. If it is one, then what is .999... / 2?

0.5

If you do the math, you will find that the result you would arrive at from 0.999... is 0.4999...

And of course, if 0.999... = 1, then 0.0999... = 0.1.
Then you just need to add 0.4 regardless of how you have been writing it.
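The halving question has a clean exact answer: since 0.999... and 1 are the same real number, 0.999.../2 = 0.5, and halving the partial sums shows where the "0.4999..." pattern comes from. An illustrative check (my own sketch, not from the thread):

```python
from fractions import Fraction

# Halve the n-term partial sum of 0.999... exactly.
n = 12
partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))  # 0.999...9 (n nines)
half = partial / 2   # its decimal expansion reads 0.4999...5

# The gap to 1/2 is 1/(2*10^n), which vanishes as n grows,
# so in the limit the half is exactly 0.5.
assert Fraction(1, 2) - half == Fraction(1, 2 * 10**n)
```

In the limit, 0.4999... and 0.5 are the same number, just as 0.999... and 1 are.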
 
Good God, man, you prefer that?
Yes.

You should have seen it before I edited it.

I warned you, but you just have to keep pushing.

And regarding your reply to Piggy... I agree with you. The crazy s($# I'm posting explains why your proof is right. It's the fiddly details you don't think matter, but which fill in all the holes. ;)
 
Please show me the problem with it and explain why it's a problem.

See below. You've done the following, at least: a) invoked infinity casually, making everybody cringe, b) assumed 0.9999... exists and makes sense (i.e. doesn't contradict anything) and c) assumed it has unique value. (Obviously, these are all OK, but you have to justify b) and c) in particular.)

That makes no sense. .999...does exist. It is 1. They are one and the same, only expressed differently.

The only qualification you need to make is that it is true in the realm of real numbers in base 10.

AS
".999... does exist" is a poor argument. So why is .999... justified and not 999...? Plus, did you prove it has unique value?

Answer without using sequences and limits, please. ;)


(This is my point. Your proof is essentially correct, but you need to justify some of your assumptions - assumptions that a "7th grader" will not spot, and doesn't care about. The language of limits and sequences makes such justification unnecessary, because such language is hyperprecise. Also, I hope I'm not being patronising or condescending - I apologise if so. I don't mean to be. I have a certain lack of way with words...)
 
AS - We all agree that .999...=1 (except maybe Soapy Sam, who apparently has trouble dealing with the fact that even though mathematicians do not claim that "infinity" is a number, it still fits perfectly well into a logical framework, and wow I just tried to lead my own post off topic). We all agree that with some added justification of assumptions, your algebra will constitute a rigorous proof. We all agree that 7th graders need not justify these assumptions.

What I did is show that the assumptions *need to be justified* or else nonsensical proofs can be generated. (And don't say things like "you just added 9" because hey, what do you think turning .999... into 9.999... is doing? :)) Your reasoning is elegant. It is simple. It is not wrong. It quickly shows that .999... = 1 to those who are used to providing justifications with little thought for the assumptions you made, or to those who do not understand the need for justifications.

Unfortunately there are many, many people who do not fall into either of those categories.
 
Uh? Your third paragraph is a bit wordy. Dr. Kitten's point is that subtracting "infinity" from "infinity" makes no sense. You are, however, perfectly entitled to multiply "infinity" by 10 and get "infinity" - that's an arithmetic function that's defined. Hilbert's Hotel, anyone?

Infinity is a defined result of an arithmetic function? No, when infinity is the result of an arithmetic function, we usually call that result "undefined" in the set of real numbers. It makes no more sense to multiply an infinitely large number by 10 than it does to subtract "infinity" from "infinity."

AS
 
AS - We all agree that .999...=1 (except maybe Soapy Sam, who apparently has trouble dealing with the fact that even though mathematicians do not claim that "infinity" is a number, it still fits perfectly well into a logical framework, and wow I just tried to lead my own post off topic). We all agree that with some added justification of assumptions, your algebra will constitute a rigorous proof. We all agree that 7th graders need not justify these assumptions.

What I did is show that the assumptions *need to be justified* or else nonsensical proofs can be generated. (And don't say things like "you just added 9" because hey, what do you think turning .999... into 9.999... is doing? :)) Your reasoning is elegant. It is simple. It is not wrong. It quickly shows that .999... = 1 to those who are used to providing justifications with little thought for the assumptions you made, or to those who do not understand the need for justifications.

Unfortunately there are many, many people who do not fall into either of those categories.

Excellently put! :)
 
0.5

If you do the math, you will find that the result you would arrive at from 0.999... is 0.4999...

And of course, if 0.999... = 1, then 0.0999... = 0.1.
Then you just need to add 0.4 regardless of how you have been writing it.

That's right.

AS
 
