Cont: Deeper than primes - Continuation 2

Status
Not open for further replies.
Then again, limit proofs are more difficult to grasp for most, so Dessi's approach has the advantage.
Limits are definitely not difficult to grasp if one observes the real line by distinguishing between one level of cardinality observation of the real line, and more than one level of cardinality observation of the real line.

By doing that, for example, 0.999...₁₀ = 1 from only |N| observation.

But 0.999...₁₀ < 1 by observing |N| from |P(N)|, exactly as done in http://www.internationalskeptics.com/forums/showpost.php?p=10328657&postcount=73.
 
Dear Dessi,

Please note that writing/reading http://www.internationalskeptics.com/forums/showpost.php?p=10080896&postcount=16 step by step does not eliminate the ability to get everything that is written in one step, and this is exactly what happens if at least visual_spatial AND verbal_symbolic brain skills are used.

Things become important when we deal with infinite collections, for example:

2 -> 1
4 -> 2
6 -> 3
8 -> 4
...

We do not need to write down all even and natural numbers or to think step-by-step, in order to know in one step that there is a bijection from all even numbers to all natural numbers (we are using parallel thinking in order to conclude that both collections have cardinality |N|).
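The pairing rule above can at least be spot-checked on a finite prefix; a minimal Python sketch (the rule 2n -> n is taken from the list above, and of course only finitely many pairs are checked here):

```python
# the pairing 2n -> n from the list above, checked on a finite prefix
pairs = {e: e // 2 for e in range(2, 21, 2)}

# each even number maps to a distinct natural number, and 1..10 are all hit,
# so on this prefix the map is injective and onto
assert sorted(pairs.values()) == list(range(1, 11))
```

The full bijection claim is, as the post says, a statement about all pairs at once, not about any finite prefix.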

Please do not forget http://www.internationalskeptics.com/forums/showpost.php?p=10327229&postcount=68, and your reply to it in http://www.internationalskeptics.com/forums/showpost.php?p=10327276&postcount=69.

Thank you.
 
My theorem in http://www.internationalskeptics.com/forums/showpost.php?p=10328657&postcount=73 is very simple.

Exactly as some finite series that is observed from a sequence with |N| values < some given limit value, so is the case about some countably infinite series that is observed from a sequence with |P(N)| values, etc. ad infinitum ... where my conclusion is that no collection is accessible to the non-composed 1-dimensional space (which is not necessarily a metric space) or in other words, it is the inaccessible limit of all possible collections.

Moreover this "points"\non-composed "line" association is without loss of generality (can be expanded to any association among collection of lower spaces with a given higher space).
 
I didn't say it did.
So please tell me what is the purpose of your question about CH?

(It has to be stressed that any number system whose members are explicitly defined and symbolized has at most |N| members.)
 
By using a parallel thinking style, you are able to know in one step that the green and purple parts in the diagram are (at least) two different levels.

By using 2.222...₃ - 0.222...₃ we actually get rid of all levels below the floating point (marked by the purple rectangle), and from now on we work only with the values at the level above the floating point and get the result 2X/2 = 1, which has nothing to do with 0.222...₃, because we already got rid of 0.222...₃ by using 2.222...₃ - 0.222...₃, and from now on it is not used anymore as a factor in our conclusions.
In other words, you reject algebraic equivalence in principle. This equation

a = b

cannot be equivalent to any of the following in your system:

a/k = b/k, for all k
ka = kb, for all k
a + k = b + k, for all k
a^k = b^k, for all k
f(a) = f(b), for any f

because as soon as we perform any algebra, we "are no longer using our original equation as a factor in our result." and we are confusing ourselves with "symbolic illusions".

I understand now why proofs of 0.999... = 1 are senseless to you: you reject algebraic equivalence in principle, along with the forms of mathematics that depend on algebra.

Regrettably, the statement 0.999... < 1 is nonsense too, because the '<' operator has a recursive, but well-defined algebraic definition based on algebraic equivalence. In your own system, the statement 0.999... < 1 must be a symbolic illusion for the exact same reason as 0.999... = 1. I would be surprised if you could write a proof for the "obvious" statement 1 < 2, or if you even knew what a proof of that statement looks like.

(If requested, I will write a formal proof for 1 < 2, then I will explain why it does not fit into your mathematical system.)
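For what it's worth, in a modern proof assistant the statement is a one-liner once '<' on the naturals is defined; a sketch in Lean 4 (assuming only core `Nat` lemmas, not the full formal proof offered above):

```lean
-- One formalization of "<" on ℕ: a < b means a + 1 ≤ b.
-- 1 < 2 then follows because 2 is the successor of 1.
example : 1 < 2 := Nat.lt_succ_self 1

-- or, mechanically, by evaluating the decidable proposition
example : 1 < 2 := by decide
```

The point stands either way: even "obvious" order facts rest on a definition of '<' built from algebraic equality.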

I understand why you introduce new forms of mathematics: because you need some way to express algebraically equivalent representations of a = b and a < b from first principles, but not principles involving algebra.

It makes sense that you have introduced an undefined process for "parallel-summations", so that you can compute expressions in a single operation without any process for carrying out the calculation, because that would require "steps" and therefore algebra. "Parallel-summation" abstracts away the algebraic "steps" which normally build up numbers piecemeal, allowing computations as a single, atomic operation.

(I would argue that you've only hidden the piecemeal steps, you've not removed them; even identities like Σ(a = 0, a -> n) a = n(n + 1)/2, which appear to remove almost all of the intermediate additions in Σ, are derived from those intermediate additions. I can provide a proof of this identity and an explanation of the reasoning behind it on request.)
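The identity in that parenthetical is easy to spot-check against the piecemeal additions it hides; a minimal Python sketch (finite n only, of course):

```python
# Gauss identity: 0 + 1 + ... + n == n(n + 1)/2
# sum(range(n + 1)) performs exactly the "hidden" intermediate additions
for n in range(100):
    assert sum(range(n + 1)) == n * (n + 1) // 2
```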

Correct me if I've misunderstood.

Once again, parallel-summation is not some particular operator; it is simply using a parallel approach in order to get a result in one step, by not being influenced by the possibly complex structure of mathematical operations and their related variables\constant values, upon finitely or infinitely many arranged levels.
Here's what I understand so far: your proof appeals to an abstract form of algebra that you can't define, using processes that you can't describe, applying some kind of method that you can't analyze, to get a result that you can't explain.

Things you call "proofs" rely on undefinable mathematical properties, meaning they can't be analyzed for correctness. But, as long as you accept undefined operations in proofs, I'll submit this amusing "un-proof":

1.999... - 0.999... = 1
(1 + 0.999...) - 0.999... = 1
1 + 0.999... - 0.999... = 1
1 + (0.999...)(1 - 1) = 1
0.999...(1 - 1) = 1 - 1
0.999... = (1 - 1) / (1 - 1)
0.999... = 1

I personally think you've been a very polite person throughout our conversation, and I am very supportive of your interest in learning mathematics, and I also support your attempt to discover entirely new branches of mathematics. As a math geek, I support other math geeks. Unfortunately, you lack the precision of language needed to communicate your thoughts to people who are knowledgeable in mathematics.
 
In other words, you reject algebraic equivalence in principle. This equation

a = b

cannot be equivalent to any of the following in your system:

a/k = b/k, for all k
ka = kb, for all k
a + k = b + k, for all k
a^k = b^k, for all k
f(a) = f(b), for any f
Dear Dessi,

I certainly do not reject algebraic equivalence, I simply do not ignore the different levels of values that are used in some algebra, for example:

X = 0.999...

10X = 9.999...

10X - X = 9.999... - 0.999... = 9 (this is the critical operation, where we get rid of X (which is some value at the level of fractions) and what is left is the (positive, in this case) value at the level of whole numbers).

From now on X can't be but some positive whole number, which has nothing to do with the original value of X, as was given by X = 0.999...

It is well known that given initial values can't be changed in the middle of some argument if we wish to provide some valid conclusion about the given initial values.

You have missed the critical operation exactly because you are using only serial observation (step-by-step thinking style) of the considered framework.

Actually there is no problem in using serial_only observation of some finite framework in the case of summation, but this is not the case if we deal with infinite summation in terms of process, because from this point of view the process can't be stopped and no exact result can be provided.

So, in this case we are using the brilliant notion of Cantor's transfinite cardinality that is definitely based also on parallel thinking, as explained in http://www.internationalskeptics.com/forums/showpost.php?p=10330277&postcount=102.

The use of finite cardinality, countably infinite or uncountably infinite cardinality is essential to my theorem in http://www.internationalskeptics.com/forums/showpost.php?p=10328657&postcount=73, where parallel thinking can't be avoided if we don't wish to find our framework stuck in some endless process, when we deal with countably infinite or uncountably infinite cardinality.

Moreover, the whole idea of, for example, the accurate value of |N| is possible only if we transcend some endless process, and this is done exactly by using parallel thinking that captures a given collection by using one step.

Also please be aware that (0.999...₁₀ = 1) OR (0.999...₁₀ < 1 by 0.000...1₁₀), or in other words, there is no contradiction because they are solutions in two different levels of the real-line.

-------------------

Please do not forget http://www.internationalskeptics.com/forums/showpost.php?p=10327229&postcount=68, and your reply to it in http://www.internationalskeptics.com/forums/showpost.php?p=10327276&postcount=69.

It is crucial for valid communication between us.

Thank you.
 
Let's improve what was written in http://www.internationalskeptics.com/forums/showpost.php?p=10330583&postcount=106, as follows:

My theorem in http://www.internationalskeptics.com/forums/showpost.php?p=10328657&postcount=73 is very simple.

Exactly as some finite series that is observed from a convergent sequence with |N| values < some given limit value, so is the case about some countably infinite series that is observed from a convergent sequence with |P(N)| values, etc. ad infinitum ... where my conclusion is that no collection is accessible to the non-composed 1-dimensional space (which is not necessarily a metric space) or in other words, it is the inaccessible limit of all possible collections.

Moreover this "points"\non-composed "line" association is without loss of generality (can be expanded to any association among collection of lower spaces with a given higher space).

(B.T.W convergent sequences with transfinite cardinality |N| < |P(N)| < |P(P(N))| < |P(P(P(N)))| < |P(P(P(P(N))))| < ... etc. ad infinitum ... are simply proper subsets of the sets that have these transfinite cardinality, for example:

The values of <0.9, 0.09, 0.009,...> are the same values of {0.9, 0.09, 0.009,...}, which is a proper subset of Q, where |{0.9, 0.09, 0.009,...}| = |Q| = |N|, and the same principle holds among collections with greater cardinality (but in this case we can't explicitly symbolize their values, since they are uncountable)).
 
Dear Dessi,

I certainly do not reject algebraic equivalence, I simply do not ignore the different levels of values that are used in some algebra, for example:

X = 0.999...

10X = 9.999...

10X - X = 9.999... - 0.999... = 9 (this is the critical operation, where we get rid of X (which is some value at the level of fractions) and what is left is the (positive, in this case) value at the level of whole numbers).

From now on X can't be but some positive whole number, which has nothing to do with the original value of X, as was given by X = 0.999...
Yes, it has everything to do with the original value of X = 0.999... . The value of X is constant in your equation; 10X is the constant 9.999..., 10X - X is also constant, and our original X = 9/9 = 1 is also constant.

Every value in the equation is a constant. So why does X = 0.999... become X = 1? Because 0.999... is an algebraically identical representation of 1.

Try plugging a few different start values of X into your equation, and perform the same operations.

Code:
X = 0.999...                        X = 1               
10X = 9.999...                      10X = 10            
10X - X = 9.999... - 0.999...       10X - X = 10 - 1    
9X = 9                              9X = 9              
X = 1                               X = 1               

X = √2/2                            X = 12
10X = 10√2/2                        10X = 120
10X - X = 10√2/2 - √2/2             10X - X = 120 - 12
9X = 9√2/2                          9X = 108
X = √2/2                            X = 12    

X = 0.11111...                      X = 0.12345 12345 12345...
10X = 1.1111...                     10X = 1.23451 23451 23451...
10X - X = 1.11111... - 0.11111...   10X - X = 1.23451 23451 23451... - 0.12345 12345 12345...
9X = 1                              9X = 1.11106 11106 11106...
X = 1/9                             X = 0.12345 12345 12345...


X = 0.99999999 (finite)             X = n
10X = 9.9999999                     10X = 10n
10X - X = 9.9999999 - 0.99999999    10X - X = 10n - n
9X = 8.9999999                      9X = 9n
X = 0.99999999                      X = n

For every X, (10X - X)/9 = X. It starts and ends with the same value. Why wouldn't it? Why would X = 0.999... be the singular exception?
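The round trip in the table above can be verified with exact rational arithmetic; a minimal Python sketch (the sample values are mine, chosen to mirror the table, with Fraction(1) standing in for the value the series 0.999... converges to):

```python
from fractions import Fraction

def round_trip(x):
    """Apply the 10X - X manipulation from the table and solve back for X."""
    return (10 * x - x) / 9

# exact rational arithmetic: every start value survives the round trip
for x in [Fraction(1, 9), Fraction(12), Fraction(12345, 99999), Fraction(1)]:
    assert round_trip(x) == x
```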

As near as I can tell, you object to the X = 0.999... case because 9.999... - 0.999... removes all of the trailing digits. Indeed, it does; 10X - X = 9, it has no other value. I can prove the algebra works out by showing that if X = 0.999..., then 10X - X = 9, therefore 9X = 9*(0.999...) = 9:

9 * .999...
= 8.999...
= Σ(n = 0, n -> ∞) (81/10)(1/10^n)
= (81/10) / (1 - 1/10) [see this identity]
= (81/10) / (9/10)
= 9

The math works beautifully. From this, one can conclude that 0.999... is an equivalent representation of 1, for the same reason that 0.111... is an equivalent representation of 1/9.
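The closed form above can be double-checked against the partial sums of the same series, using exact rationals so no floating-point noise sneaks in (a sketch; the gap formula 9/10^(N+1) is the standard geometric-series remainder):

```python
from fractions import Fraction

def partial(N):
    """Partial sum of 9 * 0.999... = Σ (81/10)(1/10^n) for n = 0..N."""
    return sum(Fraction(81, 10) * Fraction(1, 10**n) for n in range(N + 1))

# the gap below 9 shrinks geometrically: 9 - S_N = 9/10^(N+1)
for N in range(6):
    assert 9 - partial(N) == Fraction(9, 10**(N + 1))
```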

Actually there is no problem in using serial_only observation of some finite framework in the case of summation, but this is not the case if we deal with infinite summation in terms of process, because from this point of view the process can't be stopped and no exact result can be provided.
This statement is incorrect. The example above, showing 9*(0.999...) = 9, demonstrates one way of the many ways people can analyze infinite series. Let me explain:

Let's say we have a sequence {a1, a2, a3, a4, . . . }; the nth partial sum Sn is the sum of the first n terms of the sequence:

Sn = Σ(k = 1, k -> n) ak

This series converges if the sequence of its partial sums, { S1, S2, S3, . . . } converges. In other words, the series converges if there exists an L such that for any arbitrarily small positive number x > 0, there exists a large N so that for all n >= N,

| Sn - L | <= x

where |expr| is the absolute value function. If a series converges (and there are many tests for convergence), then as n -> ∞, Sn - L -> 0, that is, Sn -> L. A formal proof of this property closely resembles this explanation.
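The definition can be exercised directly; a small Python sketch with exact rationals, taking L = 1 for the series behind 0.999... (the choice of sample tolerances is mine):

```python
from fractions import Fraction

def S(n):
    """n-th partial sum of 0.999... = Σ 9/10^k for k = 1..n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

L = 1
for eps in [Fraction(1, 100), Fraction(1, 10**6)]:
    # find an N witnessing the definition for this tolerance
    N = 1
    while abs(S(N) - L) > eps:
        N += 1
    # all later partial sums stay within eps of L
    assert all(abs(S(n) - L) <= eps for n in range(N, N + 5))
```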

I know it doesn't seem "intuitive" that an infinite series converges to anything, but that intuition is wrong. Your insistence that we need a different way of analyzing infinite series isn't based on anything; in fact many of the same techniques for analyzing finite series hold for infinite series. I don't think you can articulate any counterargument to this point.

So, in this case we are using the brilliant notion of Cantor's transfinite cardinality that is definitely based also on parallel thinking, as explained in http://www.internationalskeptics.com/forums/showpost.php?p=10330277&postcount=102.

The use of finite cardinality, countably infinite or uncountably infinite cardinality is essential to my theorem in http://www.internationalskeptics.com/forums/showpost.php?p=10328657&postcount=73, where parallel thinking can't be avoided if we don't wish to find our framework stack in some endless process, if we deal with countably infinite or uncountably infinite cardinality.

Moreover, the whole idea of, for example, the accurate value of |N| is possible only if we transcend some endless process, and this is done exactly by using parallel thinking that captures a given collection by using one step.
Doron, again, I view you as a very polite person with a serious passion for mathematics, but whatever you mean by "parallel thinking" is undefined, and whatever operation you describe to compute an entire series in a single step without intermediate calculations is undefined. You aren't able to explain your undefined concepts to anyone, so your proofs can't be analyzed for correctness. As near as I can tell, you don't have a proof, nor anything novel to say about cardinality.

Since you mentioned Cantor, I encourage you to study the Cantor set and its analysis, as it has some fascinating and unexpected properties. The Wiki article on the Cantor set is actually very accessible to readers like you who have at least some familiarity with series, sets, and numbers in other bases.

Lastly, I note that you must be a computer programmer or have some programming background from the statement "we find our framework stack in some endless process". Thinking in terms of call stacks, frameworks, and serial processes immediately tells me that you imagine operations on infinite series as a computer program that can never halt. I infer that "parallel thinking" is a very informal description of a hypothetical Turing machine which, by some miracle, instantaneously reads an infinite tape of inputs and halts with an output. This process happens inside a "black box"; we don't know how the black box works, we just know that it does. Let's call this model a Super Duper Turing Machine; it's what you call "parallel-computation". While the expression (2/1 + 2/3 + 2/9 + 2/27 + ...) - (2/3 + 2/9 + 2/27 + ...) will never halt on an ordinary Turing machine, the Super Duper Turing Machine, by some black box miracle, always halts with the answer 2. There are some inputs which even our powerful Super Duper Turing Machine cannot compute; jsfisher provided one such example, and Busy Beaver numbers are another.

An interesting question is whether we could determine the halting behavior and output of our Super Duper Turing Machine, if such a machine existed. Yes, in fact we can; the entire study of calculus, infinite series, and differential geometry does exactly that. Conventional analysis techniques like the ones in this thread are Turing equivalent to computations on a Super Duper Turing Machine; both give the same answers to the same questions, and there's no reason to think we would get a different result.
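For the particular expression above, no miracle is even needed once the terms are paired off; a Python sketch with matched partial sums (finite prefixes only, but the cancellation is exact at every length):

```python
from fractions import Fraction

# terms of 2/1 + 2/3 + 2/9 + 2/27 + ...
A = [Fraction(2, 3**n) for n in range(50)]

# subtracting matched partial sums telescopes: every term but the first cancels
for N in range(1, 50):
    assert sum(A[:N]) - sum(A[1:N]) == 2
```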
 
"What we have here, is a failure to communicate..."

What Doron tries to explain, as he has explained in previous years, is that his direct perception tells him (he intuits, as opposed to calculates) that if there were a method, which he does not know how to formulate *yet*, to view the reality of sets and/or infinity as he sees them, then he would be able to show that traditional math is lacking when held up to the light of (his?) reality.

What keeps happening in all his communications, however, is that well-meaning mathematicians keep pointing out that traditional mathematics is correct.

Doron keeps trying to persuade everyone to stop thinking inside the box (he overused the box smiley in all these years...) and just give him the benefit of the doubt.

What he would need is someone extremely proficient in math that is willing to *explore* ways to get to the result he wants.

What happens now is that he tries a path and we all say 'nah, wrong path, try again'. So he is basically trying to get to his desired result by brute-force trial-and-error.

Dessi, are you able (I am not, I admit) to prove that his wished-for result can never be achieved?

I mean, we *all* can find either mathematical errors or logical errors in one way or another for every path he chooses. Maybe because there never can be a path to his idea, but maybe, just maybe, because we are all too busy defending traditional mathematics.

Doron, you would have to admit that the above is correct (and you know it is, we have gone through too many discussions in more than one place and in more than just a few years).

I know you are looking for corroboration that you did have a profound insight, so why not ask for it without playing the pedantic teacher?

EDIT: Doron, I know you read my posts, so just read this one through; I am being friendly and am throwing you a bone here.
 
Every value in the equation is a constant. So why does X = 0.999... become X = 1? Because 0.999... is an algebraically identical representation of 1.
No Dear Dessi,

The initial X = 0.999... is omitted from 9.999..., and what is left is 9, which is a whole number that has nothing to do with fractions; so is the case with 9X/9 = 9/9 = 1, which is also a whole number that has nothing to do with fractions.

The rest of your post is based on your failure to distinguish between fractions and whole numbers.

Moreover, you are still missing the fact that, for example, |N| is undefined if we get the natural numbers only in terms of an endless process.
 
Dessi, are you able to (I am not, I admit) disprove why his wished-for result can never be achieved?
I don't know what Doron's desired objective is, but in our exchanges I've learned a lot about his thought process.

I notice there is an internal logic to his entire reasoning process: he has a fascination with the concepts of infinities and infinitesimals, but he struggles with them conceptually. He only thinks of mathematical expressions in terms of a computation that runs in a computer program; this model understandably breaks down on expressions involving infinite and infinitesimal quantities, because a summation algorithm can never, in principle, halt on an endless stream of summands.

Thinking like a computer programmer, Doron wonders how one would go about computing an endless stream of inputs in a computer program. The answer is so obvious, so intuitive: by introducing a hypothetical computer that, by some miracle, sums the whole stream at once, processing every input in parallel -- parallel-summation! Amdahl's law be damned!

Let's call our hypothetical processor a Doron Machine ^_^

Doron intuits that his Doron Machine will give different answers to mathematical expressions than we normally get with analysis techniques, for reasons related to limitations in floating-point representations of numbers, meaning some real numbers have no fixed binary representation inside a Doron Machine, only infinitely precise approximations.

I personally think infinite precision is pretty good, but can we do better? Yes we can. I'm a software engineer too, and today has been a really slow day in the office. So, I decided to build a better, less buggy parallel-summation machine:

The Doron Machine 2.0 Super Deluxe Ultra.

It's an extension of a normal Doron Machine with a better implementation of numbers which, by some miracle, doesn't store approximate quantities but rather stores quantities exactly. It also makes lattes and has a decent text editor. But the important thing is that any two quantities that are algebraically equal on paper really are equal in the Super Deluxe Ultra, and vice versa.

This works nicely, because any sort of infinite and infinitesimal numerical analysis we can compute on paper, we can compute in the Super Deluxe Ultra, and vice versa. We infer that numerical analysis on paper, while a little slower, is computationally equivalent to any computable expression on the Super Deluxe Ultra.

Even in Doron's very limited model of mathematical computation, the Doron Machine, the equivalence of 0.999... = 1 is inescapable. Intuition be damned.
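A stdlib stand-in for the "stores quantities exactly" part of the Super Deluxe Ultra is Python's Fraction; a sketch using the geometric closed form a/(1 - r) for the series behind 0.999... (the function name is mine, and of course this only models the exact-arithmetic feature, not the latte maker):

```python
from fractions import Fraction

def geometric_sum(a, r):
    """Exact value of Σ a*r^n for n >= 0, assuming |r| < 1."""
    return a / (1 - r)

# 0.999... = Σ (9/10)(1/10)^n; in exact arithmetic the sum is exactly 1
assert geometric_sum(Fraction(9, 10), Fraction(1, 10)) == 1
```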
 
No Dear Dessi,

The initial X = 0.999... is omitted from 9.999..., and what is left is 9, which is a whole number that has nothing to do with fractions; so is the case with 9X/9 = 9/9 = 1, which is also a whole number that has nothing to do with fractions.
It might seem unintuitive at first glance, but the expression 0.999... does not have a fractional part.

The infinite series representation, Σ(n = 0, n -> ∞) (9/10)(1/10^n), converges to a whole number. A proof of this series convergence is given here.

The rest of your post is based on your failure to distinguish between fractions and whole numbers.
Rational numbers are any a/b, where a and b are integers and b ≠ 0; whole numbers are the rationals where a is a multiple of b (a mod b = 0). There is no meaningful distinction between whole rationals and fractional rationals; they are treated in exactly the same way.
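Python's Fraction type illustrates the point: whole and fractional rationals go through the identical code path, and the whole ones simply normalize to denominator 1 (a minimal sketch):

```python
from fractions import Fraction

x = Fraction(9, 9)   # a "whole rational"
y = Fraction(1, 3)   # a "fractional rational"

assert x == 1 and x.denominator == 1   # 9/9 normalizes to 1/1
assert y.denominator != 1
assert x + y == Fraction(4, 3)         # same arithmetic rules either way
```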
 
While the expression (2/1 + 2/3 + 2/9 + 2/27 + ...) - (2/3 + 2/9 + 2/27 + ....) will never halt on an ordinary Turing machine, the Super Duper Turing Machine, by some black box miracle, always halts with the answer 2.
Also in this case, dear Dessi, you simply eliminate infinitely many values < 1 AND > 0 by using one step with cardinality |N|, and what is left is some whole number.
 
It might seem unintuitive at first glance, but the expression 0.999... does not have a fractional part.
Actually 0.999...₁₀ is a fractional number, and this simple fact does not involve any problem of intuition.
 
Rational numbers are any a/b, where a and b are whole numbers; whole numbers are rational numbers for any a = b. There is no meaningful distinction between whole rationals and fractional rationals, they are treated in the exact same way.

Shouldn't that be "any a = xb, x being an integer and non-zero"?

EDIT: I don't know if 0 is considered a whole number and am too lazy to look it up at this moment.
 
There is no meaningful distinction between whole rationals and fractional rationals, they are treated in the exact same way.

Wrong, dear Dessi: you will never find a whole rational that is < 1 AND > 0 (which is the "home" of the fractional rationals).
 
