Deeper than primes

OK, but the bits in a quantum computer, for example, are always on, I think (I could be wrong; I don't know how a quantum computer works). And then empty information would not be a valid state. EDIT: At least not a state where one bit is off and another bit is on at the same time.
We are talking about the generalization of the concept of Information. No information (silence) is one of the options.
 
Once again your inability to distinguish between

(1(),0())

and

(1(0()))

shows your trivial reasoning.

Once again, your inability to distinguish your own fantasies from mathematics fails to make a space independent of its sub-spaces. Additionally, changing the ordering of your parentheses or the absence or presence of a comma in your notation above fails to make a space independent of its sub-spaces as well.



5) Also, there is here a professional physicist, called The Man, so please also ask him about the result of 1/2+1/4+1/8+... according to Traditional Math, and please ask him to provide the rigorous proof that stands at the basis of his answer.

Sorry Doron, I have certainly never claimed to be a "professional physicist". However, if by "professional physicist" you mean "mechanical engineer", then yes, I have spent the majority of my career as a mechanical engineer.
 
We are talking about the generalization of the concept of Information. No information (silence) is one of the options.


I should point out, Anders Lindman, that Doron's concept of "generalization" is simply to expound whatever self-contradictory nonsense comes into his mind, apparently stemming primarily from some thoughts he had during transcendental meditation (what he refers to as "direct perception"), which simply means that he wasn't even doing that right.
 

But I think there is something to primes being less redundant than other numbers. And I like the idea of a line being a non-local object while a point is a local object, plus the concept of looking at both uncertainty and redundancy at the same time. Very interesting stuff, although I don't understand all of it fully.
 
In a two-bit system with uncertainty, each bit can have the value 0 (A), the value 1 (B), or an uncertain value: a superposition of 0 and 1 (AB).

The possible combinations for the uncertain two-bit system are:

(AB,AB)
(AB,A)
(AB,B)
(A,AB)
(B,AB)
(A,A)
(A,B)
(B,A)
(B,B)

The redundancy for each combination is:

(AB,AB) - 0 or 1
(AB,A) - 0 or 1
(AB,B) - 0 or 1
(A,AB) - 0 or 1
(B,AB) - 0 or 1
(A,A) - 1
(A,B) - 0
(B,A) - 0
(B,B) - 1

The uncertainty for each combination is:

(AB,AB) - 2
(AB,A) - 1
(AB,B) - 1
(A,AB) - 1
(B,AB) - 1
(A,A) - 0
(A,B) - 0
(B,A) - 0
(B,B) - 0

If we denote a redundancy (R) of "0 or 1" as (0 + 1)/2 = 0.5, and score the combination of uncertainty and redundancy (U, R) as U + 2R, then the resulting values are:

(AB,AB) - (2, 0.5) = 3
(AB,A) - (1, 0.5) = 2
(AB,B) - (1, 0.5) = 2
(A,AB) - (1, 0.5) = 2
(B,AB) - (1, 0.5) = 2
(A,A) - (0, 1) = 2
(A,B) - (0, 0) = 0
(B,A) - (0, 0) = 0
(B,B) - (0, 1) = 2

Let the complexity of a combination k be max(Un + 2Rn) - (Uk + 2Rk), where the max is taken over all combinations n. Then the complexity for the combinations is:

(AB,AB) - 0
(AB,A) - 1
(AB,B) - 1
(A,AB) - 1
(B,AB) - 1
(A,A) - 1
(A,B) - 3
(B,A) - 3
(B,B) - 1
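
(The whole table above can be rechecked mechanically. Below is a minimal Python sketch, with function names of my own choosing, that scores each combination by the rules just stated: one unit of uncertainty per superposed bit; redundancy 1 or 0 for agreeing or differing definite bits, and 0.5 standing in for the "0 or 1" cases.)

Code:
from itertools import product

SYMBOLS = ["AB", "A", "B"]  # "AB" marks a superposed (uncertain) bit

def uncertainty(combo):
    # one unit of uncertainty per superposed bit
    return sum(1 for bit in combo if bit == "AB")

def redundancy(combo):
    # 0.5 stands in for "0 or 1" whenever any bit is uncertain;
    # otherwise 1 if the two bits agree, 0 if they differ
    if "AB" in combo:
        return 0.5
    return 1 if combo[0] == combo[1] else 0

def score(combo):
    return uncertainty(combo) + 2 * redundancy(combo)

combos = list(product(SYMBOLS, repeat=2))
max_score = max(score(c) for c in combos)  # 3, reached by (AB,AB)

for c in combos:
    print(f"({c[0]},{c[1]}) - U={uncertainty(c)}, R={redundancy(c)}, "
          f"complexity={max_score - score(c)}")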
 
epix, you can add Dr. Gérard P. Michon ( http://www.numericana.com/ , http://www.numericana.com/answer/ ) to your "mental cases" list.
This is really a "gem" -- one doesn't know where to start . . .

So Dr. Michon reaches into antiquity and compares the result with one of Zeno's paradoxes. His comparative demonstration is very compelling, because everyone knows that any distance from A to B can be traveled, and arrows leaving a bow are not excluded.

So the arrow has a trip to make; it will negotiate the distance of 1 plethron, an ancient Greek unit of length equaling about 100 feet. According to the partial summation formula that Dr. Michon included,

1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + 1/128 + 1/256 + ...

"This sum is equal to 1-2^-n when carried out only to its n-th term."

the arrow will pass 1/2 of the distance, then 3/4, 7/8, 15/16, 31/32, and so on, as the formula says. Obviously, even as n becomes very large, approaching infinity, the following inequality will hold:

1 - 2^-n < 1

In other words, according to the formula, the arrow never reaches the intended target 1 plethron away and, consequently, we never make it to work and no race is ever finished!
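
(A quick numeric check, not from Dr. Michon's page: the partial sums do creep toward 1 without ever reaching it for any finite n.)

Code:
# print the partial sums 1/2 + 1/4 + ... + 1/2^n next to the closed form 1 - 2^-n
for n in (1, 2, 4, 8, 16, 32):
    partial = sum(2.0 ** -k for k in range(1, n + 1))
    print(f"n={n:2d}: partial sum = {partial:.10f}, 1 - 2^-n = {1 - 2.0 ** -n:.10f}")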

But we know better than that -- we know that the arrow hits the target after an adjustment is made by Dr. Michon:
"This sum is equal to 1-2^-n when carried out only to its n-th term. It's simply equal to 1 if all of the infinitely many terms are added up."
Sorry, Doron, but you are not the only one experiencing peculiar ideas. Right above, Dr. Michon says that if ALL(!) of the INFINITELY MANY TERMS(!) are added up then

1 - 2^-n = 1

One might suspect that Dr. Michon is joking when he pairs the definitive-sounding quantifier ALL with INFINITY. But Dr. Michon isn't joking -- he is dead serious.


Obviously, an adjustment needs to be made to the formula to accommodate the idea that all of the infinitely many terms have been added up. Dr. Michon never included the adjusted formula, but there is only one that would accommodate the identity:

1 - 2^-∞ = 1

Here we go. By changing the exponent n so that it reflects the infinite case, Dr. Michon proved that the sum of the infinite series equals 1. The arrow has finished its 1-plethron trip and happily sticks out of Dr. Michon's ass.

There is a condition, though, for the identity to work properly:

IF 1 - 2^-∞ = 1 THEN 2^-∞ = 0

However, the conclusion that 2 to the power of negative infinity equals zero may wake up an emoticon such as this: :confused:

Why does any number to the power of negative infinity equal zero?
Here is the answer:
The question is misleading. Infinity and negative infinity are not numbers, so you cannot raise them to powers, nor can you raise numbers to them as powers.
http://wiki.answers.com/Q/Why_does_any_number_to_the_power_of_negative_infinity_equal_zero
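
For reference, the standard treatment never raises anything to the power ∞; the finite identity and the limit are kept separate:

1/2 + 1/4 + ... + 1/2^n = 1 - 2^-n (for every finite n)

lim[n→∞] (1 - 2^-n) = 1, since 2^-n → 0 as n grows

The infinite sum is then defined as that limit, so no arithmetic with ∞ is ever performed.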

Well, there seems to be another disagreement in the world of traditional mathematics. Such disagreements wouldn't exist if it were not for pseudo-mathematicians, such as Dr. Michon, who traveled all the way to antiquity to fetch a wrong but persuasive comparative example.

And now it's time to solve an equation that equals 1 as well:

1 = x/y

Well, there can't be a unique solution. The solution is a set: {1/1, 2/2, 3/3, 4/4, 5/5, ...}. But that doesn't mean all members of the set are written the same way. Given the circumstances involving Dr. Michon's visions, the solution is

1 = 6/6

"Proof":

Let 1 stand for first. If the first book of the Bible is Genesis, then 6/6 is
The LORD regretted that he had made human beings on the earth, and his heart was deeply troubled.
Genesis 6:6


Is your heart still deeply troubled, Heavenly Father?

Leave me alone.

I can call Dr. Michon. He is a well-known cardiologist and...

FY
 
Anders Lindman,

You are still counting forms that differ only by their order (for example: (A,B), (B,A)), but order, in terms of changes of id location, has no significance at this fundamental level.

At this fundamental level all we care about is the "room" under a given frame, where the order is determined by increasing/decreasing "rooms" under the given frames, as follows:

Code:
(AB,AB) (AB,A)  (AB,B)  (AB)    (A,A)   (B,B)   (A,B)   (A)     (B)     ()

A * *   A * *   A * .   A * .   A * *   A . .   A * .   A * .   A . .   A . .
  | |     | |     | |     | |     | |     | |     | |     | |     | |     | |
B *_*   B *_.   B *_*   B *_.   B ._.   B *_*   B ._*   B ._.   B *_.   B ._.

(2,2) has a "room" for (AB,AB),(AB,A),(AB,B),(AB),(A,A),(B,B),(A,B),(A),(B),()
(2,1) has a "room" for (AB,A),(AB,B),(AB),(A,A),(B,B),(A,B),(A),(B),()
(2,0) has a "room" for (AB),(A),(B),()
(1,1) has a "room" for (A,A),(B,B),(A,B),()
(1,0) has a "room" for (A),(B),()
(0,0) has a "room" for ()
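
(One concrete reading of these "rooms" -- my interpretation, not necessarily Doron's: the (2,2) room is the set of all order-free forms of at most two slots over {A, B, AB}. A short Python sketch that enumerates them:)

Code:
from itertools import combinations_with_replacement

SYMBOLS = ["AB", "A", "B"]
# unordered forms of size 2, 1 and 0 over the three symbols
forms = [c for size in (2, 1, 0)
         for c in combinations_with_replacement(SYMBOLS, size)]
print(len(forms))  # 10: six unordered pairs, three singletons, and the empty form ()
print(forms)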
 

Yeah, I think you're right, but what will happen for larger bit systems?

For example, the binary strings 0000000011111111 and 1011001000110101 both have the same redundancy if positions are ignored, but different redundancy when positions are taken into account. And this becomes even more prominent for larger strings.
 

(A,A,A,A,A,A,A,A,B,B,B,B,B,B,B,B) and (B,A,B,B,A,A,B,A,A,A,B,B,A,B,A,B) are the same form of the 16-Uncertainty x 16-Redundancy tree, which is expressed under the (1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1) frame.
 

Yes, that's what I thought. For large binary strings that would be a problem when thinking of it in terms of zip compression for example. A string with a very uniform pattern can be compressed much more than a string of the same length that has a lot of complicated structure. So in a practical sense the first string in that case would have much more redundancy even if it contained the same number of As and Bs.
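
(The zip intuition is easy to check with Python's zlib; the string length and random seed below are arbitrary choices of mine. The shuffled string has exactly the same symbol counts as the uniform one, yet compresses far worse.)

Code:
import random
import zlib

random.seed(0)
uniform = "A" * 512 + "B" * 512  # one long run of As, then one of Bs
shuffled = list(uniform)
random.shuffle(shuffled)         # same 512 As and 512 Bs, but no pattern
mixed = "".join(shuffled)

print(len(zlib.compress(uniform.encode())))  # small: two uniform runs
print(len(zlib.compress(mixed.encode())))    # several times larger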
 
Anders Lindman,

First of all, it is a good idea to get the general notion of the n-Uncertainty x n-Redundancy tree before we use it for applications, one of which is Data Compression.

The general notion of the (1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1) frame can help us to use (A,A,A,A,A,A,A,A,B,B,B,B,B,B,B,B) and (B,A,B,B,A,A,B,A,A,A,B,B,A,B,A,B) in many other applications in which compression is not the main role.

In terms of Compression, please be aware of your tendency to compress (A,A,A,A,A,A,A,A,B,B,B,B,B,B,B,B) to (A,B) (which is under the (1,1) frame).

Also in this case, the general understanding of the n-Uncertainty x n-Redundancy tree stands at the basis of this compression.
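
(One concrete reading of "compressing (A,A,A,A,A,A,A,A,B,B,B,B,B,B,B,B) to (A,B)" -- my interpretation, not stated in the post -- is run-length encoding, which keeps one symbol per run plus its count:)

Code:
from itertools import groupby

def rle(s):
    # collapse each run of identical symbols to (symbol, run length)
    return [(ch, len(list(run))) for ch, run in groupby(s)]

print(rle("AAAAAAAABBBBBBBB"))  # [('A', 8), ('B', 8)] -- the (A,B) shape plus counts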
 

Data compression in general is probably a broader problem than can be covered with just one method. I looked up what is called Kolmogorov complexity. That seems to be a general definition useful for data compression. On the other hand, how the heck to calculate the Kolmogorov complexity for all kinds of information? :confused: Probably very tricky in practice.
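
(Kolmogorov complexity is in fact uncomputable in general, so in practice one bounds it from above with a real compressor: the compressed length, plus a constant for the decompressor, is an upper bound on K(x). A minimal sketch of that standard trick:)

Code:
import os
import zlib

def k_upper_bound(data: bytes) -> int:
    # zlib-compressed length: an upper bound on Kolmogorov complexity,
    # up to an additive constant (the size of the decompressor itself)
    return len(zlib.compress(data, 9))

print(k_upper_bound(b"AB" * 256))      # highly regular -> small bound
print(k_upper_bound(os.urandom(512)))  # random bytes -> bound close to 512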
 
Redundancy and/or Uncertainty are still considered "white noise" that has to be reduced in order to discover the desired strict information, which is generally considered the valuable data.

I claim that this is a trivial approach to Complexity. For example:

The rabbit that escapes from the fox does its best to produce an unpredictable path during its run, in order to stay alive as a complex system.

This unpredictable path is characterized by a high degree of "white noise", which denies the fox the strict information it needs in order to hunt the rabbit.

This is just one example that does not follow from Kolmogorov complexity, which is tuned to measure only strings of strict information by reducing the "white noise" as part of the measurement.


Here is an example of how context is based on connections within a text, which enables serial/parallel observation under one form.

[attached image]



By serial-only observation one can't get, for example, http://www.internationalskeptics.com/forums/showpost.php?p=6016109&postcount=10078.

Again,

Traditional Math does its job very well by calculating the amount of a partial case of the k-Uncertainty x k-Redundancy tree.

The main thing here is not the "how many?" question, but what actually enables the terms by which that question can be asked.

Since Organic Numbers are a linkage between Non-local and Local qualities, they are the fundamental term that enables Quantity, where Quantity is the basis of the "how many?" question.

The "how many?" question is usually based on the distinction between different ids that are added to each other in order to define a sum, which is a certain size.

But the Non-locality/Locality linkage is not limited to distinct ids, and in this case the "how many?" question is extended beyond the different ids that are added to each other in order to define a sum.

By this extension, the "how many?" question can't capture the complexity of the parallel/serial linkage of the k-Uncertainty x k-Redundancy tree, where each part of it is both a global AND a local case of it, because of the qualitative principle that stands at the basis of Quantity.

k-Uncertainty x k-Redundancy trees are nothing but finite cases of one and only one complex ∞-Uncertainty x ∞-Redundancy tree, yet they are based on the same principle as the ∞-Uncertainty x ∞-Redundancy tree, where this principle is the qualitative linkage between Non-locality and Locality.

The reasoning of the past 3,500 years did not develop the understanding of the qualitative principle that stands at the basis of Quantity.

Organic Mathematics does exactly this: it discovers the qualitative foundations of Quantity. Step-by-step reasoning can't get that, because step-by-step reasoning takes Quantity as a fundamental term for its development (by avoiding the understanding of its qualitative foundations).

This is exactly the reason why Superposition is understood, for example, as the sum over histories (http://en.wikipedia.org/wiki/Path_integral_formulation) of the paths of a quantum element from position A to position B. By doing so, it totally misses the qualitative linkage between Non-locality and Locality that actually enables this sum, because a sum (which is caused by linear addition of each stimulus individually; see "serial observation" in http://www.internationalskeptics.com/forums/showpost.php?p=6175451&postcount=10845) is nothing but a partial case of a framework that also deals with fogs (non-local numbers) and any possible mixture of sums/fogs.

This is also exactly the reason why infinite convergent elements are taken as sums and not as fogs, and this is how words like Superposition or Limit are used without any understanding (where the understanding here is exactly the qualitative foundations of Quantity).
 
Data compression in general is probably a broader problem than can be covered with just one method. I looked up what is called Kolmogorov complexity. That seems to be a general definition useful for data compression. On the other hand, how the heck to calculate the Kolmogorov complexity for all kinds of information? :confused: Probably very tricky in practice.
It all depends on the medium that the information goes through. The word "cat" has only 3 letters, but many other languages use more letters to form its translation. If the medium is sets of binary strings, then you don't have to think that hard to compress a binary string of 32 characters, for example, such as 10110111010010001011101000011100. You just write a short and simple algorithm that interprets the string as a binary number and converts it into a hexadecimal number. In this particular case, the compressed info has only 8 characters: b748ba1c. The compression ratio is 4:1, but you can use other number bases, as long as the set of ASCII characters permits. That seems to be a trivial case, but one letter such as "b" is "spelled" by the machine in a way that requires more than one bit of info.
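
(The conversion is a one-liner to verify; the example string is from the post above, the code is my sketch:)

Code:
bits = "10110111010010001011101000011100"
packed = format(int(bits, 2), "x")              # read as binary, write as hex
print(packed)                                   # b748ba1c: 8 characters instead of 32
assert format(int(packed, 16), "032b") == bits  # lossless round trip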
 
But I think there is something to primes being less redundant than other numbers. And I like the idea of a line being a non-local object while a point is a local object, plus the concept of looking at both uncertainty and redundancy at the same time. Very interesting stuff, although I don't understand all of it fully.

Many of the concepts are (at least superficially) interesting. The problem is that Doron can't understand and does not accept that (1) his view doesn't preempt a more traditional mathematical perspective, and (2) to give his view some legitimacy of its own, he needs to develop it within some sort of consistent framework from base principles.

Instead, he merely asserts, then contradicts himself, then covers with gibberish and more contradictions, extols the trivial, claims that all the rest of Mathematics is in error, and insists that we are all too stupid to understand his nonsense.

Meanwhile, he claims 2 is not a member of a set with 2 as a member.
 