
My take on why the study of consciousness may not be as simple as it seems

Well, is a simulated decision a real decision?

Obviously, the water in Second Life won't water my real garden.

But if I "program" a "computer" in Second Life to play chess, based on a sufficiently detailed simulation of an actual chessboard (involving simulated pieces, simulated moves, and what not), that simulation can play chess in the real world.

Yes, this is what I was saying.
 
Identical conscious states? I'm sorry but I don't remember this. Sincerely. I would have told you from the start that the answer was no. Why would that even be a point for discussion? Who cares? So long as the algorithm is conscious. You and I don't even have identical conscious states. Not even identical twins have identical conscious states.
But if the desk-checked program was the moment you are experiencing right now, then clearly it would have conscious states identical to yours.

Which is why I phrased it that way.
 
Do you really not recall me asking "Do you think it possible that this moment you are experiencing right now could be the result of a billion years of writing numbers on paper?"
Roughly, yes. Exactly, no. No sufficiently complex system can be modeled exactly (see chaos theory). I had no idea you were being so exacting in your language. But the puzzling thing is that you don't need to appeal to intuition. Chaos theory rebuts it perfectly.
 
But if the desk-checked program was the moment you are experiencing right now, then clearly it would have conscious states identical to yours.
Your checking is itself a variable. The paper is a variable. The pencil is a variable. The temperature of the room is a variable. There are many variables that you can't model. There is no possible way for you to exactly model the moment I'm experiencing now (see chaos theory).
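
The chaos-theory point here can be made concrete with a toy system. A minimal Python sketch (using the logistic map, a standard textbook example, not anything specific to this thread): two starting values that differ by an unmeasurably small amount end up completely decorrelated, so a model whose inputs are even slightly imprecise cannot track the real system exactly.

```python
# Sensitive dependence on initial conditions, via the logistic map
# x -> 4x(1-x). An error of one part in a billion in the starting
# value grows until the two trajectories are completely unrelated,
# so no finite-precision model can track the system exactly.

def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9   # an unmeasurably small difference
max_gap = 0.0
for step in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # the gap grows to the same order as the values themselves
```

After a single step the gap is still around 1e-9; a few dozen steps later it is of order 1.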
 
Well I understand that part, but there's something missing still in going down to N--either an explicit claim that N would produce my consciousness, or an explanation of something left out.

Assume that A is your simulation of a neuron's calculations--include whatever inputs happen to come in whenever/wherever they occur (just picking this for discussion--we could easily aggregate these... consider it a functional decomposition style analysis).

Now I build A', which is an equivalent simulation, using only clocked NANDs. So I want to use A' to define the "random" ordering, as follows. I have t NAND gates total. I will label all of them uniquely with integers from 1 to t. What is significant here is that I have defined an ordering (let's further assume they have labeled inputs, and we always represent them in a certain order, so that we know 01 from 10, in case that matters).

During each clock cycle in A', p of my t NAND gates are computing 00, q are computing 01, r are computing 10, and s are computing 11. So let P1 through Pp be the NAND gates calculating 00, in order, Q1 through Qq be the ones calculating 01, in order, and so forth. For the next clock cycle, we do the same thing, but start with Pp+1, Qq+1, etc.

When we're done, I want to calculate the NAND gates in A', in this arbitrary order:
P1, Q1, R1, S1, P2, Q2, R2, S2, etc.

Once I hit the end of one of the P, Q, R, and S sequences, while I still have the other three to calculate, I'm just going to start running some other "program" in the background in a random order (effectively I want to just fill it in, so that I keep iterating).

Now I wind up with N, which is a program that's just calculating, in this order, (00)->1, (01)->1, (10)->1, (11)->0, and going back to the start. So are you claiming that this program implements an out-of-order equivalent of A? If we keep running it, will it produce a conscious mind--albeit an out-of-order one?

Alright, I have my response -- yes, it will produce a conscious mind.

I say this because, to the extent that you can run the operations out of order, it will only be during concurrent operations of the algorithm anyway. For serial operations the system has no choice but to pause and wait for required results before proceeding.

So your example is a little misleading because there will be times when you simply can't run the calculations out of order without changing the results of the algorithm. Accepting this, it isn't as crazy as it sounded at first.
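
This exchange can be sketched in code. A minimal Python illustration (the gate wiring here is hypothetical, chosen only for the example): within one clock cycle each NAND gate reads only inputs latched at the start of the cycle, so the gates are independent and any evaluation order, including the interleaved P1, Q1, R1, S1, ... order, produces the same outputs. The serial constraint only bites across cycles, when a gate's input depends on another gate's previous output.

```python
from itertools import zip_longest

def nand(a, b):
    return 0 if (a and b) else 1

# Hypothetical wiring: each gate's inputs are latched at the start of
# the clock cycle, so within the cycle the gates are independent.
latched = {1: (0, 0), 2: (0, 1), 3: (1, 0), 4: (1, 1), 5: (0, 0)}

def run_cycle(order):
    """Evaluate every gate, in the given order."""
    return {g: nand(*latched[g]) for g in order}

# Bucket the gates by input pattern (the P, Q, R, S groups), then
# interleave them: P1, Q1, R1, S1, P2, ...
buckets = [[g for g in sorted(latched) if latched[g] == p]
           for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
interleaved = [g for row in zip_longest(*buckets)
               for g in row if g is not None]

# Reordering independent operations cannot change the result.
assert run_cycle(sorted(latched)) == run_cycle(interleaved)
```

The assertion holds precisely because no gate in the cycle reads another gate's output; the moment one does, the ordering stops being free.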
 
Yes, this is what I was saying.
It is more or less the proposition that AkuManiMani suggested earlier: models can do the action but don't duplicate the ontology.

A chess program is not strictly speaking a model anyway - there is no "thing" that it is modelling.

But I never doubted for a moment that the desk check could model the function, I even stipulated that it might understand.
 
I don't know. Though real water can't make virtual flowers grow, and vice versa, are programs modeling consciousness conscious, by definition?

I remember in our previous discussions westprog and I were going on about how we need a physical understanding of what consciousness is before we can reproduce it in technology. PixyMisa, et al., have been arguing from 3 assumptions: [1] consciousness is capable of being modeled by a Turing machine, [2] we already have a sufficient model, and [3] such a model is an -actual- reproduction of consciousness. Essentially, the whole debate so far has been whether consciousness is a physics issue or simply an IT problem.

If consciousness is a form of energy, like electricity or matter, simply modeling functions correlated with it in a program will not be enough. One would have to recreate the actual physical process that generates consciousness.

I mean, virtual characters running in a simulation are running as far as the simulation is concerned. The difference is that conscious programs COULD and WOULD interact with reality.

My guess is if we did create conscious programs we would have to create the appropriate physical medium. After that, it would be a matter of having them interface with some purely virtual environment and/or an interface with the external environment we operate in. For entities like that, interfacing with the "external" world via some robotic system might be akin to us using an avatar in a VR interface :D
 
I make a calculation and write down the answer on paper.

Result? A number on the paper.

I make another calculation based on that and write it down.

Result? Another number on a piece of paper.

No matter how long you do this you will end up with numbers on paper.

No consciousness, no time dilation.

Just numbers on paper.

I mean, what is the mechanism being proposed here?

Well, keep in mind that entire worlds have been created that are nothing more than bits on silicon.

If you suggested such a thing to someone who hadn't seen it, they would say "but those are just bits on silicon..."

But then their jaw will drop when they see something like GTA4, which fits neatly on an Xbox360.
 
Roughly, yes. Exactly, no. No sufficiently complex system can be modeled exactly (see chaos theory). I had no idea you were being so exacting in your language. But the puzzling thing is that you don't need to appeal to intuition. Chaos theory rebuts it perfectly.
I think I mentioned a while back that the only perfect model of a physical system was the system itself.

There are quite a few things that rebut it perfectly. It is simply not true to say that a system capable of running an algorithm is itself algorithmic; if you think hard, you can devise counterexamples. So there is no reason in the first place to think that there must be an algorithm that is equivalent to the human brain.

But the point being pressed was the equivalency. But without that then it might not be conscious at all, or it might be conscious in a way we wouldn't recognise.
 
Fair enough.

Now remember the original claim that a system capable of running an algorithm will behave as the MoIP says it will.

And the MoIP says that an algorithm will do what an algorithm will do.

So any system capable of running an algorithm should always do what an algorithm will do.

So the question is, can you design a system that is capable of running an algorithm, but which will do what an algorithm won't do?

And bear in mind that the Church-Turing thingy works both ways.

Well, an algorithm can't really do anything "physical." And if anyone says they can, what they mean is that a physical system behaving according to an algorithm can do something "physical."

So your question should be, can you design a physical system that is capable of behaving according to an algorithm yet can also behave in a way that isn't in accordance with any algorithm.

And I think the answer is "no," and I think that is what people like drkitten and pixy mean when they say "everything is an algorithm" or "everything is computable," etc. They mean that there is no behavior for which there is no possible algorithm. (please chime in if I am incorrect, drkitten or pixy).

Of course this doesn't take into account true randomness of QM and stuff like that, but I am not an expert in that area so I am not sure how it impacts the arguments.
 
Which, as I pointed out before, was none of my doing

Well I guess I made an incorrect assumption about your intent when making that post. I apologize.

But you are asking me to buy that mental arithmetic and numbers on paper could produce time dilation? How?

LOL, no no no. I am saying that time dilation would affect a process that produces consciousness.

I.E. if you were in a starship.

I only brought it up to illustrate that the idea of a microsecond of consciousness taking billions of years to occur isn't absurd at all, since relativity theory has been saying it for 50+ years.
 
Well, an algorithm can't really do anything "physical." And if anyone says they can, what they mean is that a physical system behaving according to an algorithm can do something "physical."

So your question should be, can you design a physical system that is capable of behaving according to an algorithm yet can also behave in a way that isn't in accordance with any algorithm.

And I think the answer is "no," and I think that is what people like drkitten and pixy mean when they say "everything is an algorithm" or "everything is computable," etc. They mean that there is no behavior for which there is no possible algorithm. (please chime in if I am incorrect, drkitten or pixy).

Of course this doesn't take into account true randomness of QM and stuff like that, but I am not an expert in that area so I am not sure how it impacts the arguments.
Since an algorithm must be equivalent to a function on natural numbers, a genuinely random event could not be an algorithm, nor could a process involving non-discrete values.

So if you can think of an implementation of an algorithm on a physical system that involves randomness or non-discrete values, then you would have a system capable of running an algorithm that did not itself behave algorithmically.
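
A small sketch of that distinction, under the usual Church-Turing reading (the particular function below is made up purely for illustration): an algorithm is a fixed function on the naturals, while a physical realization that lets a genuinely random event leak into its result is no longer a function of its input at all.

```python
import random

# An algorithm corresponds to a fixed function on natural numbers:
# the same input always yields the same output.
def algorithmic(n):
    return (n * n + 1) % 97

# A physical system "running" that algorithm, but with a genuinely
# random event mixed into its behavior, is not a function of n alone,
# so the system as a whole is not behaving algorithmically.
def physical(n, coin=random.random):
    return algorithmic(n) + (1 if coin() < 0.5 else 0)

assert all(algorithmic(12) == algorithmic(12) for _ in range(10))
```

Call `physical(12)` twice and you may get different answers; call `algorithmic(12)` a million times and you never will.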
 
Well I guess I made an incorrect assumption about your intent when making that post. I apologize.



LOL, no no no. I am saying that time dilation would affect a process that produces consciousness.

I.E. if you were in a starship.

I only brought it up to illustrate that the idea of a microsecond of consciousness taking billions of years to occur isn't absurd at all, since relativity theory has been saying it for 50+ years.
OK. But as I said, I am OK with the billion years seeming like half a second in any case, if the conscious state did in fact arise.
 
Your checking is itself a variable. The paper is a variable. The pencil is a variable. The temperature of the room is a variable. There are many variables that you can't model. There is no possible way for you to exactly model the moment I'm experiencing now (see chaos theory).
Although, thinking about it, if you were, in fact, the desk-checked algorithm, then what you are experiencing would be the result of the algorithm.
 
What is true can be determined by chance, whim, or impulse, and not by necessity, reason, or principle?

Arbitrary

Ah I see what you are saying.

Well let me ask you this -- suppose someone selects a number arbitrarily. Let us call this number a. Furthermore, suppose there is another number constrained for some arbitrary reason to be exactly 5 greater than a; call it b.

Now -- is it true, or arbitrary, that b - a == 5?
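
The point can be put in one loop (a trivial Python sketch): however arbitrarily a is chosen, the constrained relation b - a == 5 holds every time.

```python
import random

# a is picked arbitrarily; b is constrained to be exactly 5 greater.
# The values are arbitrary, but the relation between them is not:
# b - a == 5 holds by necessity, not by chance.
for _ in range(1000):
    a = random.randint(-10**6, 10**6)   # an arbitrary choice
    b = a + 5                           # the constraint
    assert b - a == 5
```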
 
