• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

My take on why the study of consciousness may indeed not be so simple:

I think I mentioned a while back that the only perfect model of a physical system was the system itself.
And by system we don't mean just the neural correlates. We are also talking about stimuli, chemicals, temperature, etc.

So not only the physical system but every variable, many that we might never be able to even conceive of. You can't step into the same river twice. No brain can ever have the exact same experience twice.
 
Although, thinking about it, if you were, in fact, the desk-checked algorithm, then what you are experiencing would be the result of the algorithm.
And each iteration of checking would result in a different experience. Roughly the same but not exactly identical.
 
Since an algorithm must be equivalent to a function on the natural numbers, a genuinely random event could not be an algorithm, nor could a process involving non-discrete values.

So if you think of an implementation of an algorithm on a physical system that involved randomness or non-discrete values, you would have a system capable of running an algorithm that did not itself behave algorithmically.

But you can always generate an algorithm after the fact. So just because there is randomness doesn't mean there isn't an algorithm -- it just means it would be impossible to determine which algorithm was being followed until after any relevant random events had occurred. But there would still be an algorithm that was followed.
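A minimal Python sketch of that "after the fact" point (the process here is invented for illustration): record the random bits as they occur, and the recorded run becomes an ordinary deterministic algorithm.

```python
import random

def noisy_process(x, rng):
    """A process that consults a random source, recording every bit it uses."""
    trace = []
    for _ in range(5):
        bit = rng.getrandbits(1)  # the genuinely random event
        trace.append(bit)
        x = 2 * x + bit
    return x, trace

def post_hoc_algorithm(x, trace):
    """The same steps, with the random events now fixed constants."""
    for bit in trace:
        x = 2 * x + bit
    return x

result, trace = noisy_process(3, random.Random())
# Until the random events occurred we couldn't say which algorithm was
# being followed; afterwards, this one reproduces the run exactly.
assert post_hoc_algorithm(3, trace) == result
```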

And I am not sure the discrete thing is important.
 
Ah I see what you are saying.

Well let me ask you this -- suppose someone selects a number arbitrarily. Let us call this number a. Furthermore, suppose there is another number constrained for some arbitrary reason to be exactly 5 greater than a; call it b.

Now -- is it true, or arbitrary, that b - a == 5?

Necessity, reason, and principle say true.

Chance, whim, or impulse might say anything.
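The point is mechanical enough to check in code (a toy Python check; the range for a is arbitrary):

```python
import random

for _ in range(1000):
    a = random.randint(-10**9, 10**9)  # chosen by whim
    b = a + 5                          # constrained to be exactly 5 greater
    assert b - a == 5                  # holds by necessity, whatever a was
```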
 
A process that involves non-discrete values cannot, by that definition, be an algorithm.

Well, the catch-22 is that you can discretize the value in question to an arbitrary level of precision.

Meaning, if you wanted to measure how far off a system was from what the algorithm stated it should be, you couldn't: any measurement precise enough to reveal a discrepancy would also let you discretize the algorithm further, until the discrepancy falls below what you can measure.
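A quick Python sketch of that regress, with math.pi / 4 standing in for some hypothetical continuous physical value (the names here are illustrative):

```python
import math

def discretize(x, decimals):
    """Snap x onto a discrete grid with spacing 10**-decimals."""
    return round(x, decimals)

continuous = math.pi / 4  # stand-in for a non-discrete physical value

# However precisely you test "how far off" the system is, a finer
# discretization pushes the residual below what the test can detect.
for decimals in (2, 4, 8, 12):
    residual = abs(continuous - discretize(continuous, decimals))
    print(decimals, residual, residual <= 0.5 * 10**-decimals)
```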
 
Necessity, reason, and principle say true.

Chance, whim, or impulse might say anything.

Well then you have answered your question.

The fact that we cannot tell if we are in a simulation or not has no more bearing on the consistency and truth values of our logical statements than the fact of a being arbitrary has on the relative difference between a and b.
 
I say this because, to the extent that you can run the operations out of order, it will only be during concurrent operations of the algorithm anyway. For serial operations the system has no choice but to pause and wait for required results before proceeding.
As I understand the concept of a desk check, I don't believe this reasoning applies. If we have gates A'1 and A'2 having inputs (11) and (01) respectively, that feed into gate A'3 in the next cycle, then A'3 is going to get inputs (01). We can calculate (01)->1 for A'2, then (01)->1 for A'3, then (11)->0 for A'1.

Suppose for some odd reason a mistake showed up in the A' run, where the A'2 gate emitted 0. Then we'd simply do something different, is all--it may even change the order of our checks (in this case it would). We use N to compute A'3 first as (00)->1, and that would look right. Then A'2 as (01)->1, and that would look wrong, and then we'd be done with our desk check, and conclude that the A'2 gate's calculation messed up. This means that our A'3 calculation computed what it did, yet not what it was supposed to, but so what? We found the error--it failed the desk check. We go and fix A'2 and rerun it.

So no, we don't have to wait for A'2 to complete in order to run A'3 in a post hoc desk check. We simply assume the entire thing ran smoothly, and try to prove the assumption false.

If we were doing a test in the blind, however, we wouldn't have this information, so we have to run the whole thing in order. But we're not doing that, I don't think. I think all we're doing is figuring out if A' ran correctly, and I think we already know everything A' did. But the case to consider is the case in which it did happen to run correctly, and we're running N out of order.
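To make that concrete, here is the desk check as a Python sketch, assuming the gates are NANDs (consistent with the (11)->0 and (01)->1 values above); everything it needs is in the recorded run, so the checks can go in any order:

```python
def nand(x, y):
    return 0 if (x, y) == (1, 1) else 1

# The recorded run of A' that we are desk-checking: each gate's inputs
# and the output it actually emitted.
recorded = {
    "A1": {"inputs": (1, 1), "output": 0},
    "A2": {"inputs": (0, 1), "output": 1},
    "A3": {"inputs": (0, 1), "output": 1},  # fed by A1's and A2's outputs
}

# Check A3 before the gates that feed it: every check needs only the
# recorded values, so serial data dependencies impose no order on us.
for name in ("A3", "A2", "A1"):
    gate = recorded[name]
    ok = nand(*gate["inputs"]) == gate["output"]
    print(name, "passes" if ok else "fails")
```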
So your example is a little misleading because there will be times when you simply can't run the calculations out of order without changing the results of the algorithm. Accepting this, it isn't as crazy as it sounded at first.
Don't you have the same problem computing entire time slices backwards? You have to know what the inputs were to slice 2000 in order to simulate that at all. That depends on what happened in 1999. I can't see a way to reasonably interpret doing a desk check backwards without having the same concerns about serial versus concurrent processing.
 
Well then you have answered your question.

The fact that we cannot tell if we are in a simulation or not has no more bearing on the consistency and truth values of our logical statements than the fact of a being arbitrary has on the relative difference between a and b.

This.
 
But no one claimed that cars couldn't float. The claim, however, is that all algorithms behave according to the mathematics of information processing.

So the question asks for someone to design something that is provably impossible to design.

The question asks for someone to design a system that is non-algorithmic.

An example of a non-algorithmic system has already been given - a Turing machine with a random number generator bolted on the side. There's been an objection that the coin-tosser I suggested is still deterministic. I don't necessarily accept that in this context, but it's possible to replace the coin-tosser with a device that uses quantum events such as radioactive decay.

Such a device can do things that the Turing machine can't. It can't carry out the computations that the Turing machine can carry out any better, but so what? If we restrict ourselves to discussing only things that we know that Turing machines can do, we'll come to the conclusion that nothing can do anything that a Turing machine can't do. It's not a productive way to proceed.

Since we've established that there are systems which have capabilities that Turing machines do not, we have to consider whether the human mind is such a system, or whether everything it does is capable of being performed by a Turing machine.
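A small Python illustration of the distinction, with os.urandom standing in for a genuinely random physical source such as radioactive decay (the function names are invented):

```python
import os

def deterministic_machine(x):
    """A Turing-machine-style computation: a fixed function of its input."""
    return x * x

def randomized_machine(x):
    """Consults a random source; its output is not a function of x at all."""
    return x * x + os.urandom(1)[0]

assert deterministic_machine(7) == deterministic_machine(7)  # always equal
print(randomized_machine(7), randomized_machine(7))          # usually differ
```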
 
No brain can ever have the exact same experience twice.

That might be practically true, but I believe the argument is that the human brain could be simulated on a Turing machine and that this would produce identical experiences, in every way.

I don't think this is true, but I think that some people arguing here do believe that it's true. In fact, I think that Rocketdodger is arguing that it is a matter of provable mathematical fact that it's true.
 
The fact that we cannot tell if we are in a simulation or not has no more bearing on the consistency and truth values of our logical statements than the fact of a being arbitrary has on the relative difference between a and b.

In some ideal way perhaps.

But I think I prefer this... may I paraphrase you:

The fact that we cannot tell if we are experiencing a past life regression or not has no more bearing on the consistency and truth values of our logical statements than the fact of a being arbitrary has on the relative difference between a and b.
 
drkitten said:
No more than the Theory of Evolution or the Theory of Relativity is "only a theory."
I don't think so.

The Church-Turing thesis is a proof that all definitions of information processing so far proposed are equivalent. At this point, the list of actual definitions is quite lengthy. We don't even have a well-formed definition of anything more powerful than a Turing machine -- except for the explicitly counterfactual notion of "oracle computing," which rather blatantly assumes magic.
But it is not a proof that all algorithms are Turing compatible. So the brain could employ an algorithm that is not Turing compatible.

I agree that this is a nit-pick, since surely the Church-Turing thesis is correct.

~~ Paul
 
Robin said:
Take a hammer and hit your thumb (not really of course).
Who says behavior has to be overt? Think of consciousness as internal behavior. What problems arise that would make you think it is something more?

Edited to add: As Darat said.

~~ Paul
 
But it is not a proof that all algorithms are Turing compatible.

Er,.... yes, it is. The definition of "algorithm" is "something that can be run on a Turing-machine." We've had that since 1930.

What we have also had since 1930 are the alternative notions of "something that can be computed in the lambda calculus," "something that can be computed via recursive functions," "something that can be computed by cellular automata," and other variant notions.

The Church-Turing thesis is simply a proof that these are all "algorithms."
 
Er,.... yes, it is. The definition of "algorithm" is "something that can be run on a Turing-machine." We've had that since 1930.

Hmm - if an "algorithm" were defined that required a machine more powerful than a Turing-machine (but one that could theoretically be built, unlike other hyper-Turing-machines), then I suspect that the new system would become the de facto definition of algorithm. The basic reason being that the term existed long before the theory of computation.
 
westprog said:
An example of a non-algorithmic system has already been given - a Turing machine with a random number generator bolted on the side.
I don't think we understand the ramifications of this. At least I don't. Is this more powerful than a plain Turing machine? What about initializing the tape with an arbitrarily large number of random numbers?

In any event, this doesn't seem to have any ramifications on a physicalist model of the brain, since quantum noise is available to the brain if necessary.

~~ Paul
 
I don't think we understand the ramifications of this. At least I don't. Is this more powerful than a plain Turing machine?

Well, there's the Non-Deterministic Turing Machine, which would be a bit similar, but instead of having a random number generator you can have state transitions with multiple possible outcomes that can be selected arbitrarily. As I recall, such a machine would allow some problems to be computed with a lower order of complexity (and hence more quickly), but it wouldn't allow you to compute anything you couldn't already compute with a DTM. I don't think this machine would do so either, since you'd pretty much only be able to use the RNG for a similar purpose (or for the randomness itself if it was required).
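For what it's worth, a deterministic program can simulate such a machine by breadth-first search over every branch the arbitrary choices allow, which is why the NDTM buys speed rather than new computability. A Python sketch (the transition-table format is invented for illustration):

```python
from collections import deque

def ndtm_accepts(transitions, start, accept, word, max_steps=10**5):
    """Deterministically simulate a toy NDTM by exploring every branch.

    transitions maps (state, symbol) to a list of (new_state, write, move)
    triples; several triples under one key are the nondeterministic choices.
    """
    initial = (start, 0, tuple(word))
    queue, seen = deque([initial]), {initial}
    for _ in range(max_steps):
        if not queue:
            return False  # every branch halted without accepting
        state, pos, tape = queue.popleft()
        if state == accept:
            return True  # some branch accepts
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"
        for new_state, write, move in transitions.get((state, symbol), []):
            new_tape = list(tape)
            if 0 <= pos < len(new_tape):
                new_tape[pos] = write
            config = (new_state, pos + move, tuple(new_tape))
            if config not in seen:
                seen.add(config)
                queue.append(config)
    return False  # give up after max_steps (a toy bound, not a halting proof)
```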
 
drkitten said:
Er,.... yes, it is. The definition of "algorithm" is "something that can be run on a Turing-machine." We've had that since 1930.

What we have also had since 1930 are the alternative notions of "something that can be computed in the lambda calculus," "something that can be computed via recursive functions," "something that can be computed by cellular automata," and other variant notions.

The Church-Turing thesis is simply a proof that these are all "algorithms."
Hmm. I don't think that's the definition of an algorithm. I think the definition involves steps in an imperative-style task, and the claim that they can be simulated with a Turing-complete system is just an assertion.

Now, just to contradict myself, here is a possible proof:

http://research.microsoft.com/apps/pubs/default.aspx?id=70459

And here is a program for proving it:

http://www.math.ist.utl.pt/~ojakian/ojakianSlidesCT2008.pdf

~~ Paul
 
