My take on why the study of consciousness may indeed not be so simple

You claim that an arbitrary "what is real/what isn't real (a simulation)" distinction is the only "what is real/what isn't real" distinction that is a true (or non-arbitrary) "what is real/what isn't real" distinction.

I just don't understand how this can be true (or non-arbitrary) when the distinction between what is real and what isn't real, after all... is arbitrary.

Well that is your problem -- you think "true" and "non-arbitrary" are the same thing.

Why? Those terms don't have anything to do with each other...
 
Yeah, it's not very clearly defined.

Assuming that the neurons in your brain make a certain sequence of state transition in computing two seconds of your conscious mind, if we run the same set of transitions backwards what happens?

Well, we start with all your memories of those two seconds in place, and we end up with them all gone. At any point in the process, the paper-you remembers things in the original chronological order. Since the calculations and hence the results are the same but reversed, you have to be unaware of the external order of processing, the time taken, the mechanism involved, and so on.

So if you run the simulation backwards, the simulated-you will experience those two seconds forwards - as far as it is concerned. Forwards, backwards, or in completely random order, it doesn't matter, as long as you don't change the results.

I do not agree with this -- if the state transitions are determined by non-invertible functions then the sequence cannot be run backwards.

So the only way to do it would be to have the entire results of the simulation and then "play" it backwards. But in that case it is no longer information processing, it becomes merely information. The only time it is processing is when it is run forward as intended.
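To make that concrete, here is a toy sketch (Python; the two-counter "state" and the max-based step rule are invented purely for illustration). Because the step function throws information away, there is no way to step it backwards directly; the only way to visit the states in reverse is to run it forwards first, record every state, and then read the record in reverse -- at which point you are looking up a finished result rather than computing anything.

```python
# Toy state-transition system with a non-invertible step function.
# The two-counter "state" and the max-based rule are invented purely
# for illustration; real neuron models are vastly more complicated.

def step(state):
    a, b = state
    # max() discards information: (3, 1) and (3, 2) both map to (3, 4),
    # so no step_inverse() can exist.
    return (max(a, b), a + 1)

def run_and_record(initial, n_steps):
    """Run forwards, recording every intermediate state."""
    trajectory = [initial]
    for _ in range(n_steps):
        trajectory.append(step(trajectory[-1]))
    return trajectory

trajectory = run_and_record((0, 0), 5)

# "Running it backwards" is just reading the finished record in reverse.
# No new computation happens here -- only lookup of stored results.
for state in reversed(trajectory):
    print(state)
```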
 
Well I understand that part, but there's something missing still in going down to N--either an explicit claim that N would produce my consciousness, or an explanation of something left out.

Assume that A is your simulation of a neuron's calculations--include whatever inputs happen to come in whenever/wherever they occur (just picking this for discussion--we could easily aggregate these... consider it a functional decomposition style analysis).

Now I build A', which is an equivalent simulation, using only clocked NANDs. So I want to use A' to define the "random" ordering, as follows. I have t NAND gates total. I will label all of them uniquely with integer numbers, from 1 to t. What is significant here is that I defined an ordering (let's further assume they have labeled inputs, and we always represent them in a certain order, so that we know 01 from 10, in case that matters).

During each clock cycle in A', p of my t NAND gates are computing 00, q are computing 01, r are computing 10, and s are computing 11. So let P1 through Pp be the NAND gates calculating 00, in order, Q1 through Qq be the ones calculating 01, in order, and so forth. For the next clock cycle, we do the same thing, but start with Pp+1, Qq+1, etc.

When we're done, I want to calculate the NAND gates in A', in this arbitrary order:
P1, Q1, R1, S1, P2, Q2, R2, S2, etc.

Once I hit the end of one of the P, Q, R, and S sequences, while I still have the other three to calculate, I'm just going to start running some other "program" in the background in a random order (effectively I want to just fill it in, so that I keep iterating).

Now I wind up with N, which is a program that's just calculating, in this order, (00)->1, (01)->1, (10)->1, (11)->0, and going back to the start. So are you claiming that this program implements an out-of-order equivalent of A? If we keep running it, will it produce a conscious mind -- albeit an out-of-order one?
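A toy sketch of the within-cycle part of this construction might look like the following (Python; the gate count and the latched inputs are made up). Because the inputs for a clock cycle are fixed before any gate fires, evaluating the gates in the natural 1..t order or in the interleaved P1, Q1, R1, S1, ... order gives identical outputs -- whether the resulting fixed cycle of truth-table lookups still "implements A" is exactly the question being asked.

```python
import random

def nand(a, b):
    return 0 if (a and b) else 1

def evaluate_cycle(inputs, order):
    """Evaluate one clock cycle of a clocked NAND circuit.

    inputs: dict gate_id -> (a, b), latched at the start of the cycle.
    order:  the sequence in which we choose to fire the gates.
    Because the inputs are latched, the order cannot change the outputs.
    """
    outputs = {}
    for gate in order:
        a, b = inputs[gate]
        outputs[gate] = nand(a, b)
    return outputs

# Made-up latched inputs for gates 1..t during one clock cycle.
t = 8
inputs = {g: (random.randint(0, 1), random.randint(0, 1)) for g in range(1, t + 1)}

# Natural order 1..t.
natural = evaluate_cycle(inputs, list(range(1, t + 1)))

# The interleaved order from the post: group the gates by input pattern
# (the P, Q, R, S lists), then take P1, Q1, R1, S1, P2, Q2, ...
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
groups = {p: [g for g in sorted(inputs) if inputs[g] == p] for p in patterns}
interleaved = [groups[p][i]
               for i in range(t)
               for p in patterns
               if i < len(groups[p])]

assert evaluate_cycle(inputs, interleaved) == natural
```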

This is a good question. I will have to meditate on it.
 
I make a calculation and write down the answer on paper.

Result? A number on the paper.

I make another calculation based on that and write it down.

Result? Another number on a piece of paper.

No matter how long you do this you will end up with numbers on paper.

No consciousness, no time dilation.

Just numbers on paper.

I mean, what is the mechanism being proposed here?
 
Indeed. The halting problem raises its scaly head again.

Robin, you cannot, in general, know what an algorithm will do without running the algorithm.
But you will know what the algorithm will do the second time you run it.

At least in theory.
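A tiny Python illustration of both halves of that (the Collatz step count is chosen only because nobody can predict it without actually running it): the first call has to run the algorithm; the second call just reads back the recorded answer.

```python
# "You cannot, in general, know what an algorithm will do without
# running it -- but you will know the second time you run it."
# Toy illustration: actually run it once, then replay the record.

def collatz_steps(n):
    """An algorithm whose result nobody can predict without running it."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

cache = {}

def run(n):
    if n not in cache:          # first time: we genuinely have to run it
        cache[n] = collatz_steps(n)
    return cache[n]             # second time: just read the recorded answer

print(run(27))   # computed
print(run(27))   # replayed from the record
```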
 
To be fair, I'm talking about conventional computers, not holodecks. If you can get your computer to provide you oranges without any input of water, sucrose, proteins and all the other constituents of oranges, then I would be very impressed.

If your argument is idealism then I will concede the argument. It's theoretically possible to create The Thirteenth Floor. It's possible that our virtual hero could get stuck in a rainstorm and catch a cold, be miserable and have to run to the store for lozenges. Orange flavored ones.

I was certainly not arguing for idealism, as such a "theory" is unprovable and irrelevant to anything. I was simply curious as to what properties or processes you thought could not be simulated.
 
OK, here is a great business opportunity.

Write a simulation of a dynamo and get it to provide the electricity for the computer running the simulation.

Global warming solved in a flash.

Ok, fair enough. But inside the simulation the dynamo CAN work. Of course, the argument works both ways. Putting water in my computer won't make the virtual flowers grow.
 
I notice that you chose the seventh of eight definitions, when the very first definition corresponds directly to mine (though mine is more precise).

Why did you do that?

A typical human behaviour, I'd guess. Our annoying ability to cherry-pick. Usually not even intentionally.
 
I notice that you chose the seventh of eight definitions, when the very first definition corresponds directly to mine (though mine is more precise).

Why did you do that?


First of all, I chose definition 7 since the first several were redundant. Definition 7 sums up all the previous senses rather concisely: thoughts, feelings, and volition. I'll simply post them all, since you apparently think I'm trying to avoid some devastating point:

con⋅scious⋅ness  [kon-shuhs-nis]
–noun
1. the state of being conscious; awareness of one's own existence, sensations, thoughts, surroundings, etc.
2. the thoughts and feelings, collectively, of an individual or of an aggregate of people: the moral consciousness of a nation.
3. full activity of the mind and senses, as in waking life: to regain consciousness after fainting.
4. awareness of something for what it is; internal knowledge: consciousness of wrongdoing.
5. concern, interest, or acute awareness: class consciousness.
6. the mental activity of which a person is aware as contrasted with unconscious mental processes.
7. Philosophy. the mind or the mental faculties as characterized by thought, feelings, and volition.
—Idiom
8. raise one's consciousness, to increase one's awareness and understanding of one's own needs, behavior, attitudes, etc., esp. as a member of a particular social or political group.

As can be seen, each of those 8 senses of the word refers either to consciousness -of- particular object(s) or to -attributes- of consciousness itself. They are not mutually exclusive or separate concepts.

Second point: There is a difference between simply -processing- information and -awareness- of information. As I've already emphasized to you numerous times, our own physiology processes information [self-referential or otherwise] even when we are unconscious. Your definition of consciousness is not "more precise"; you've simply co-opted the label "consciousness" for another concept.
 
Err, that is not quite true -- I didn't say that it doesn't. I said that MoIP states that an algorithm will do what an algorithm will do.
Fair enough.

Now remember the original claim that a system capable of running an algorithm will behave as the MoIP says it will.

And the MoIP says that an algorithm will do what an algorithm will do.

So any system capable of running an algorithm should always do what an algorithm will do.

So the question is, can you design a system that is capable of running an algorithm, but which will do what an algorithm won't do?

And bear in mind that the Church-Turing thingy works both ways.
 
Ok, fair enough. But inside the simulation the dynamo CAN work. Of course, the argument works both ways. Putting water in my computer won't make the virtual flowers grow.

That's pretty much the heart of the issue. Simulations are just models of a thing; they aren't ontologically identical to what they're intended to model.
 
That's pretty much the heart of the issue. Simulations are just models of a thing; they aren't ontologically identical to what they're intended to model.
I agree that is the heart of the matter. You shouldn't confuse a model with the thing it is modelling.
 
Well, you made a post in the form of

1) <assertion that sounds absurd at face value>
But again, it is none of my doing that the assertion sounds absurd.
and included the "billions of years" in the <assertion that sounds absurd at face value>.
Which, as I pointed out before, was none of my doing.
If you do not think there is anything strange about a billion-year conscious state that seems like half a second to the consciousness, then why did you include the notion in that statement?
Because - and maybe you should read carefully this time - that was the proposition.

I was stating it. As plainly as I could. I don't think that is the absurd part.
It doesn't need to, but it can.
Well there is a chance that I might buy a disembodied conscious state produced by mental arithmetic and numbers on paper.

But you are asking me to buy that mental arithmetic and numbers on paper could produce time dilation? How?
 
That's pretty much the heart of the issue. Simulations are just models of a thing; they aren't ontologically identical to what they're intended to model.

I don't know. Though real water can't make virtual flowers grow, and vice versa, are programs modeling consciousness conscious by definition? I mean, virtual characters running in a simulation are running as far as the simulation is concerned. The difference is that conscious programs COULD and WOULD interact with reality.
 
Speed, complexity and other minds: appeals to intuition
Speed and complexity replies. The speed at which our brains process information is (by some estimates) 100,000,000,000 operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions to write down all of the numbers. This brings the clarity of Searle's [Robin's] intuition into doubt.
That does not seem to be relevant.

I have already stipulated that it might take a billion years to desk check half a second of consciousness.

So now you seem to be implying that I had not considered the length of time.

Did you read my argument properly? If so then what is the point of what you quoted?
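For scale, a quick sketch using only the figures in the quoted passage plus made-up hand-calculation rates (the rates are the only invented numbers): at one pencil-and-paper step per second, half a second of brain activity at 1e11 operations per second works out to something like a thousand years, and at one step per hour it reaches the "millions of years" the quote mentions.

```python
# Scale check using only the figures quoted above; the hand-calculation
# rates in the loop are the only invented numbers.

ops_per_second_of_brain_time = 1e11
simulated_seconds = 0.5                      # the "half a second" under discussion
operations = ops_per_second_of_brain_time * simulated_seconds

seconds_per_year = 3.15e7
for seconds_per_hand_step in (1, 60, 3600):  # one step per second / minute / hour
    years = operations * seconds_per_hand_step / seconds_per_year
    print(f"{seconds_per_hand_step:>5} s per step -> about {years:.1e} years")
```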
 
rocketdodger said:
Take a system that uses a true RNG, and for every instance of a random number generated, simply create a Turing machine that features it. That is, if the first use of the RNG is like "If RNG(0) is > B, branch" you can put "If A > B move left" in the Turing machine, where A = RNG(0) (zero meaning simply the first call).
But where does the machine get A? If it is from a true RNG, then it's an augmented Turing machine. If it's from a list of true random numbers on the tape, then how was that list initialized? It can't be an infinite list, because we can't initialize an infinite number of values.

~~ Paul
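A toy Python sketch of the substitution under discussion (the left/right "program", the threshold, and the number of steps are all made up): version 1 consults a true RNG while it runs, which is the augmented machine; version 2 is the deterministic rewrite in which every RNG(i) has been replaced by a value read off a tape -- a tape that, as Paul notes, is finite and has to be filled in before the run starts.

```python
import random

# Version 1: an "augmented" machine that consults a true RNG as it runs.
def branchy_program_with_rng(b, n_steps, rng=random.random):
    position = 0
    for _ in range(n_steps):
        if rng() > b:        # fresh randomness drawn at run time
            position -= 1    # "move left"
        else:
            position += 1
    return position

# Version 2: the plain-Turing-machine rewrite -- every RNG(i) is replaced
# by a value read from a tape that was filled in before the run started.
def branchy_program_with_tape(b, tape):
    position = 0
    for a in tape:           # A = RNG(0), RNG(1), ... read off the tape
        if a > b:
            position -= 1
        else:
            position += 1
    return position

# The catch Paul raises: the tape is finite and has to be initialised
# somehow (here, by calling the RNG in advance) before the deterministic
# machine ever runs.
n_steps = 10
tape = [random.random() for _ in range(n_steps)]
print(branchy_program_with_tape(0.5, tape))
```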
 