Simple mathematical problem (?)

Seems like this is simply a convoluted way of writing R x {0,1,2,3,4,5,6,7,8,9}, since, absent any algebraic structure, .999...1 would, in your formulation, function identically to (.999... , 1). One infinite list of digits "followed" by another infinite list of digits is the same as simply having an ordered pair, each element of which is one infinite list of digits.
I guess there's some similarity, but I was actually thinking of a more general extension, that of indexing a sequence of decimal digits with an arbitrary ordinal. For example, you could use the first uncountable ordinal ω1 to index a sequence of decimal digits. Something like that could not be mimicked in any nice way with an ordered pair (or ordered triple, or any ordered, finite, n-tuple).
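
To make the indexing idea a bit more concrete, here is a rough Python sketch. The representation and names are entirely my own, just for illustration, and it only covers ordinals below ω·ω (nothing remotely like ω1): a position is a pair (m, n) standing for ω·m + n, compared lexicographically, and the point is simply that the positions stay well-ordered, with a next position after every position.

# A rough sketch (my own ad-hoc representation): ordinals below ω·ω
# written as pairs (m, n) meaning ω·m + n, compared lexicographically.
from typing import Tuple

Ordinal = Tuple[int, int]   # (m, n) stands for ω·m + n

def less_than(a: Ordinal, b: Ordinal) -> bool:
    """Lexicographic order on the pairs matches the ordinal order."""
    return a < b            # Python compares tuples lexicographically

def successor(a: Ordinal) -> Ordinal:
    """Every position has a next one: ω·m + n -> ω·m + (n + 1)."""
    m, n = a
    return (m, n + 1)

# (0, n) are the finite positions 0, 1, 2, ...; (1, 0) plays the role of ω.
print(less_than((0, 10**6), (1, 0)))       # True: every finite position comes before ω
print(successor((1, 0)))                   # (1, 1), i.e. ω + 1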

I also disagree that your formulation follows naturally from the current definition of ".999...". It is a valid extension, true, but it requires stating the definition of ".999..." in a particular way so that it can be extended. When you speak of one infinite list following another infinite list, "follow" cannot be anything but an abstract term, as the usual meaning does not allow for such a use.
That's fair enough; what seems natural to one person may not seem natural to another. The reason it strikes me as natural is that with this extension we preserve the well-orderedness of (the index of) the string of decimal digits. In particular, there's always a next digit (though not always an immediate predecessor, of course). But I do still claim it makes sense to say one infinite list "follows" another infinite list--in exactly the sense that this is what happens in the class of ordinals:

0, 1, 2, 3,..., ω, ω+1, ω+2, ω+3,..., ω+ω, ω+ω+1,...

and so on.

Of course, I will say again I'm not really trying to create any new algebraic structure with sequences of digits like this (or anything of that sort, really). To be fair, my main point was really just to play devil's advocate for a bit and maintain that notation such as 0.000...1 does make sense (which I do still claim).
ω isn't there though. The number of digits (not necessarily the value of the whole string of them) is clearly a natural number, so any specific quantity of digits is < ω, and an infinite number of digits is mappable to ω. For higher ordinals of infinity to apply, you'd have to be talking about a set that cannot be mapped to the natural numbers, which is not the case here. No matter how many sets of natural numbers you combine, you've still got a set which is mappable to the natural numbers. You need something else entirely before you can talk about the next higher ordinal infinity.
To be honest, I'm not entirely sure what you mean here. I get the impression you may be thinking more in terms of cardinals than ordinals. There are a lot more countable ordinals (beyond the natural numbers) you have to go through before you actually get to the uncountable ordinals:

The ω is most certainly "there" (whatever "there" means); it's just the first transfinite ordinal (followed by ω+1, ω+2,... and so on like I just listed earlier).

And there are "plenty" of ordinals larger than ω which are all still countable (i.e., can be mapped one-to-one to the natural numbers):

ω+1, ω+2,..., ω+ω, ω+ω+1,..., ω+ω+ω, ω+ω+ω+1,..., ω^2,..., ω^3,..., ω^ω,...

and so on. These are all distinct, countable ordinals. In fact, there are quite literally too many countable ordinals to list in any conceivable sense--there are uncountably many countable ordinals (there are ω1=aleph1 many countable ordinals, to be exact).

But all of these countable, transfinite ordinals are suitable for indexing a list of decimal digits:

0.999... (a standard decimal representation of a real number)

is indexed by ω (a countable ordinal)

0.000...1

is indexed by ω+1 (a countable ordinal)

0.000...111...23

is indexed by ω+ω+2 (a countable ordinal)

Like I said, all of these are countable ordinals, falling "well short" of the first uncountable ordinal, ω1.

Of course, you could index a list of decimal digits with the smallest uncountable ordinal, ω1, as well (or any arbitrary uncountable ordinal). Admittedly, at that point it would be difficult to make sense of any sort of notation along the lines of 0.245...342324...4324...; ω1 is simply too big for that sort of notation to be clear and readable. However, you could still think of such a list as a function f: ω1 -> {0,1,2,3,4,5,6,7,8,9}, like I mentioned previously.
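
As a rough illustration (using the same ad-hoc pair representation (m, n) for ω·m + n as in my sketch above; the function names are made up), each of the example strings above is literally a function from positions to digits:

# The three examples above as digit functions on positions (m, n) = ω·m + n.
def digits_0_999(pos):            # "0.999..." : a 9 at every finite position
    m, n = pos
    return 9 if m == 0 else None  # no digits at or beyond ω: the index is ω

def digits_0_000_1(pos):          # "0.000...1" : zeros, then a 1 at position ω
    return 1 if pos == (1, 0) else (0 if pos[0] == 0 else None)

def digits_0_000_111_23(pos):     # "0.000...111...23" : indexed by ω+ω+2
    m, n = pos
    if m == 0:
        return 0                  # the first ω digits are 0
    if m == 1:
        return 1                  # the next ω digits are 1
    if (m, n) == (2, 0):
        return 2                  # digit at position ω+ω
    if (m, n) == (2, 1):
        return 3                  # digit at position ω+ω+1
    return None                   # the index stops at ω+ω+2

print(digits_0_000_111_23((0, 7)), digits_0_000_111_23((1, 7)), digits_0_000_111_23((2, 1)))
# prints: 0 1 3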
 
To be fair, my main point was really just to play devil's advocate for a bit and maintain that notation such as 0.000...1 does make sense (which I do still claim).
Saying that we can assign a meaning to it is different from saying that it has a meaning. We can assign a meaning to 2/0, but that doesn't mean that the notation 2/0 makes sense.
 
Saying that we can assign a meaning to it is different from saying that it has a meaning. We can assign a meaning to 2/0, but that doesn't mean that the notation 2/0 makes sense.
Yeah, I do agree with that. Of course in mathematics, things don't have a meaning until you actually do assign a meaning to them in the first place (except possibly at the foundational level of, for example, set theory, where sets don't necessarily have "meaning" outside the structure that the foundational axioms describe. I guess you could say the "meanings" assigned in mathematics beyond that are fundamentally in terms of the foundational set theory).

Anyway, I do agree there's no natural and unambiguous way of assigning a meaning to "2/0", where "/0" by common convention would mean multiplication by the multiplicative inverse of 0 (of which there is none, of course).

But at the same time, I do still think that when you see a "hypothetical" decimal expansion (denoted as "0.000...1", for example) it's very natural to interpret it as an indexing of the digits by an ordinal: You've got a sequence indexed by the first segment of the ordinals (i.e., the "0"'s are being indexed by the natural numbers), followed by a next digit (the "1", indexed by ω). This is exactly as the actual first segment of the ordinals (again, the natural numbers) is followed by the actual next ordinal (ω).

It seems a very natural interpretation to me, but I do appreciate the fact that different people have different appreciations of the meaning of the word "natural". This is what's natural to me.
 
Oh no....... you have posted to the Thread That Must Not Be Bumped...

:boxedin:

Everyone brace yourself for another 15 pages...

If a man showed you three doors, you picked one, and he opened another and it was empty, your odds of getting a car if you switch to the other one are p=M*C*W*Sn+(1-M)*Cnm*G', which turns out to be 0.999...

But that's a problem because 0.999... is the critical density. For p<0.999... the car is spherical, for p=0.999..., the car is flat, and for p>0.999... the car is turbocharged. Since each parameter in the above equation is determined completely arbitrarily, there is an error factor and we can say p=0.999...+-5%.

Thus since probabilities can't be greater than 1 and there aren't many numbers x with 0.999...<x<=1, most of the possible values for p fall within the spherical car range, and in fact it is infinitely more likely the car is a sphere than the car is turbocharged. But this means our C, Sn, and Cnm terms are probably slightly off anyway. So it's not really a well-posed problem in the first place.
 
Yes, that's much better. The proof in the opening post had a typo in it. I think an even simpler way to put it is:

X=0.999(rec)
10X=9.999(rec)
10X-X=9.999(rec)-0.999(rec)
9X=9
X=1

As far as I can tell, this proof is valid, and X is (rigorously) equal to 1.
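
For what it's worth, here's the same conclusion with no digit shifting involved at all, just the usual geometric-series form of the repeating decimal (one standard way to make it rigorous):

\[
0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}} \;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1 .
\]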

This proof is tautological. It is circular reasoning.
The step which causes it to be tautological is multiplying a recurring number by 10, simply by moving the decimal point to the right.

The difference between 0.999(rec) and 1 may be infinitesimal,
but also, the difference between 10 * 0.999(rec) and 9.999(rec) is also infinitesimal.
In normal maths, you multiply by 10 simply by moving the decimal point to the right, but with recurring numbers this causes an infinitesimal approximation - for example -

0.999 (recurring to infinity) multiplied by 10 gives
9.999(recurring to one less than infinity!)

This is the inaccuracy which makes it appear to be a proof when it is not.

The problem as I see it is following the apparent rules of maths but losing sight of the logic. The lesson seems to be that there are different rules for recurring numbers - e.g. multiplying by 10 simply by moving the decimal point to the right gives an infinitesimal inaccuracy.
 
0.999 (recurring to infinity) multiplied by 10 gives
9.999(recurring to one less than infinity!)
There is no such thing as "to one less than infinity". This is a flawed concept of infinity you're using here. Where would the end of the infinite line be, from which you'd chop off that last 9?
I prefer the fractions.

1/9 = .111...
2/9 = .222...
3/9 = 1/3 = .333...
4/9 = .444...
5/9 = .555...
6/9 = 2/3 = .666...
7/9 = .777...
8/9 = .888...
9/9 = 1

Is that last step of 1/9, from 8/9 up to 9/9, infinitesimally close to the same size as the other eight, but just slightly bigger?
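
If anyone wants to check that table mechanically, here's a small Python sketch (plain long division on exact fractions; the code is just mine for illustration) that prints the first few decimal places of each n/9:

from fractions import Fraction

for n in range(1, 10):
    value = Fraction(n, 9)                          # exact rational n/9
    whole, rem = divmod(value.numerator, value.denominator)
    digits = ""
    for _ in range(6):                              # six places of long division
        rem *= 10
        digits += str(rem // value.denominator)
        rem %= value.denominator
    print(f"{n}/9 = {whole}.{digits}...")
# 1/9 = 0.111111...  through  8/9 = 0.888888...  and then  9/9 = 1.000000...
# The exact value of 9 * (1/9) is 1; "0.999..." is just another name for it.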
 
How about my proof, not far back?

Or, better yet, if .9999..... < 1, that means there is a positive distance between them on a continuum. What is .999999... plus one half that distance?
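
To spell that out a little (just a sketch, using only the usual ordering of the reals): if 0.999... < 1, put d = 1 - 0.999... > 0 and m = 0.999... + d/2, so that

\[
0.999\ldots \;<\; m \;<\; 1 .
\]

But any real number m < 1 has a decimal expansion $m = \sum_{k \ge 1} d_k 10^{-k}$ with every digit $d_k \le 9$, hence $m \le \sum_{k \ge 1} 9 \cdot 10^{-k} = 0.999\ldots$, contradicting the left-hand inequality. So there is no positive distance to halve.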
 
No, there are not different rules for recurring numbers. .333... (recurring) can be shown to be equal to 1/3 in exactly the same way that .999... (recurring) can be shown to be equal to 1. Multiplying .333... (recurring) by 10 does not introduce any inaccuracy. It gives you 3 and 1/3, exactly, which is 3.333... (recurring).
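
One way to see that nothing is lost in the shift (again, this is just the series form of the repeating decimal, nothing new):

\[
10 \times 0.333\ldots \;=\; 10 \sum_{k=1}^{\infty} \frac{3}{10^{k}} \;=\; \sum_{k=0}^{\infty} \frac{3}{10^{k}} \;=\; 3 + \sum_{k=1}^{\infty} \frac{3}{10^{k}} \;=\; 3 + \tfrac{1}{3} \;=\; 3.333\ldots
\]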
 
No, there are not different rules for recurring numbers. .333... (recurring) can be shown to be equal to 1/3 in exactly the same way that .999... (recurring) can be shown to be equal to 1. Multiplying .333... (recurring) by 10 does not introduce any inaccuracy. It gives you 3 and 1/3, exactly, which is 3.333... (recurring).

It is an inaccuracy of the decimal system, in that the number cannot be wholly written down in a finite number of decimal places.
In practical terms, 0.999(rec) = 1, but, entering into the spirit of this debate, I was trying to illustrate why the 'proof' was logically flawed.
It is impossible to prove logically that 0.999(rec) = 1 without a logical flaw which makes the proof tautological.
It is more a philosophical question, one you can argue forever, whether 0.999(rec) = 1. I was trying to illustrate that an exactly analogous and counterbalancing facet of the proof was the assumption that 0.999(rec) * 10 = 9.999(rec).
The original question, although perhaps not 100% meaningful, is at least obvious to mathematicians, because it can be put into symbolic form. The difference I am suggesting, which causes the tautology, is of a similar nature, but because it cannot be put into a similarly clear mathematical form, it perhaps cannot be seen as obviously; there will always be those who argue, but this is the only reason why this supposed proof seems to work.
 
It is impossible to prove logically that 0.999(rec) = 1 without a logical flaw which makes the proof tautological.
It is more a philosophical question, one you can argue forever, whether 0.999(rec) = 1.

If by the first you mean that 0.999(rec)=1 by definition, then I agree.
The second, though? Nah. It is not philosophical. Either you are aware of how/whether you've defined the symbol 0.999(rec) or you aren't. And once you've realized this, the question of whether you "should" define 0.999(rec) as 1 or not becomes uninteresting.

Edit: MdC, you should probably check to make sure his IQ is under 100+2sigma or so first.
 
It is an inaccuracy of the decimal system, in that the number cannot be wholly written down in a finite number of decimal places.
In practical terms, 0.999(rec) = 1, but, entering into the spirit of this debate, I was trying to illustrate why the 'proof' was logically flawed.
It is impossible to prove logically that 0.999(rec) = 1 without a logical flaw which makes the proof tautological.
It is more a philosophical question, one you can argue forever, whether 0.999(rec) = 1. I was trying to illustrate that an exactly analogous and counterbalancing facet of the proof was the assumption that 0.999(rec) * 10 = 9.999(rec).
The original question, although perhaps not 100% meaningful, is at least obvious to mathematicians, because it can be put into symbolic form. The difference I am suggesting, which causes the tautology, is of a similar nature, but because it cannot be put into a similarly clear mathematical form, it perhaps cannot be seen as obviously; there will always be those who argue, but this is the only reason why this supposed proof seems to work.


The problem is that while you can have a number system where 0.999... isn't 1, things like calculus stop working. That is why most people use the standard reals.
 
If 0.999... is not equal to one, what fraction expressed as a/b where both a and b are integers (and b is not zero), is it equal to?
Or is it not a rational number?

If it isn't a rational number, is 0.333... rational?
 
