
My take on why indeed the study of consciousness may not be as simple

...snip...

What is it that can tell that the context switching routines have been monkeyed with?

...snip...

What is it that knows "aha, that was not the same 5 that I calculated in the last step"?

Any external observer can see it, and it is contradictory for the consciousness itself to know -- because then it wouldn't be two different consciousnesses. By definition you only have access to the private behavior of your self and nothing else.

So there is no issue here.

You mean in the sense that it was the "same" Fibonacci sequence calculated in your example?

Actually, that was a bad example because you can run such an algorithm with zero input.

Forget I said that, and instead suppose the algorithm is simply to take the value X and add it to the value of an input I at each step and assign the result to X, with the initial value of X being anything you want.

It is the "same" if and only if the initial value, and the input I at each step, is the same.

So my point is that if anything in the environment changes that leads to an input being different than it was before, the consciousness of Run1 and Run2 will be different at every step past where the input deviation occurs. However, Run3 will only be different between when the deviation occurs and the next context switch -- because after the context switch it is using the old results of Run2.
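To make the accumulator algorithm concrete, here is a minimal sketch (in Python; the function and variable names are just illustrative, not anything from the thread). Two runs with the same initial value and the same inputs produce identical states at every step; a run whose input deviates at some step differs at that step and every step after it.

```python
def run(x0, inputs):
    """Accumulator: at each step, add the input I to X and record the result."""
    states = []
    x = x0
    for i in inputs:
        x = x + i
        states.append(x)
    return states

run1 = run(0, [1, 2, 3, 4])
run2 = run(0, [1, 2, 3, 4])   # same initial value, same inputs
run3 = run(0, [1, 2, 9, 4])   # input deviates at the third step

# run1 == run2 at every step; run3 matches only before the deviation,
# then differs at the deviation and at every step after it.
```

So "sameness" of the run really is just sameness of the initial value and the inputs, step by step.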
 
Robin said:
And then, based on the caesium clocks, they start to run the instructions from Run3 in order - so for example op1 is run on one computer and after a suitable wait the caesium clock on another device at least a light year away runs op2 and so on until it gets back to the original device and starts again.

The same physical instructions are run in precisely the same order as in Run3, just a little further apart in space.

So if Run3 produced consciousness, then so should Run4
Yes, although without motor outputs it would be tough to tell.

rocketdodger said:
If Paul is Run4, he is actually <all of Run2 up until the instruction to be carried out on device D> + the next step in the algorithm, to be carried out on device D.

And the same goes for every device. A single instance of Paul isn't "distributed" among the devices, each device is a separate instance of Paul. While -- like I explained -- "most" of Paul (as in, the majority of the algorithm) was already run -- as Run2.
Sorry, you lost me.

~~ Paul
 
When I was a child of about ages 6--14, I would occasionally suffer from "clock speed syndrome." All of a sudden things would seem to be happening at a faster speed. I felt like I was walking faster, talking faster, seeing the world move around faster. Everything that other people did would seem faster, too. However, no one noticed, which means that I was not actually moving faster than usual or talking faster than usual. The episodes lasted a few minutes. At the time this was attributed to my brain growing faster than my skull, although that might be silly.

The brain's clock speeds are plastic.

http://www.steadyhealth.com/unusual__fast_feeling__in_my_head_t160555.html

~~ Paul
 
Yes, although without motor outputs it would be tough to tell.
Why? Run1 didn't have motor outputs. So still there is no difference.

But again, am I really such a woo for thinking that maybe my conscious experience does not somehow defy the laws of physics?

If you - this moment you are experiencing right now, could not be Run4, then it couldn't be Run1.
 
Any external observer can see it,
So what?

The external observer is not conditioning whether or not the algorithm produces consciousness.

What is it in the system or universe that knows that the 5 used in instruction n was not the 5 saved in instruction n-1 and can use this information to decide whether or not to produce consciousness?
So there is no issue here.
Big issue here.
Actually, that was a bad example because you can run such an algorithm with zero input.

Forget I said that, and instead suppose the algorithm is simply to take the value X and add it to the value of an input I at each step and assign the result to X, with the initial value of X being anything you want.

It is the "same" if and only if the initial value, and the input I at each step, is the same.
But we are talking about a program run each time with the same initial values.
 
When I was a child of about ages 6--14, I would occasionally suffer from "clock speed syndrome." All of a sudden things would seem to be happening at a faster speed. I felt like I was walking faster, talking faster, seeing the world move around faster. Everything that other people did would seem faster, too. However, no one noticed, which means that I was not actually moving faster than usual or talking faster than usual. The episodes lasted a few minutes. At the time this was attributed to my brain growing faster than my skull, although that might be silly.

The brain's clock speeds are plastic.

http://www.steadyhealth.com/unusual__fast_feeling__in_my_head_t160555.html

~~ Paul
I have heard of that; it is quite fascinating. Must be horrible to experience, though.

But I already know that the brain clock speeds are plastic as I said to rocketdodger - I have no problem with a moment of consciousness that takes 1 billion years seeming like half a second.

The problem is, say the initial run takes 9 months. It generates an apparent half second of conscious experience. No problem. But when does that half second of conscious experience physically take place?

If you say it takes place in that nine month run then my experience of it involved pulling together information that would take many years to bring together. Does the moment of consciousness not emerge until years after the run has completed?

But in that case there is still a problem as there is no physical communication between the devices.

So if my mind is an algorithm my consciousness must involve some sort of non-physical communication.
 
Sorry, you lost me.

First, run a program and save the entire state of the system at every instruction.

Then, configure some space probes (one probe for each instruction) and load them up with one of those saved states, such that each probe will run a single instruction from the original program on the state that was the result of the prior instruction during the original run.

Robin is saying that because the instructions are the same, and in the same order, as the original program, and they all operate on the same states as they do in the original program, that the program is a single instance that is now distributed throughout space.

I am saying that no, each probe now represents a separate instance of the program which happened to branch off of the original program when the state the probe is initialized with was saved.

Because the steps that were taken, in physical reality, to produce each of the states must also be accounted for. You can't just say "oh, I pulled this state out of thin air" because you didn't -- you ran the program to produce that state.

So in other words, the sequence of behaviors that include probe #345 -- corresponding to instruction 345 executing according to state 344 -- is not linked in any way to probe #344. Probe #344 has nothing to do with probe #345, other than it just happens to run an instruction that corresponds to the one previous to probe #345. Instead, the sequence of behaviors that includes probe #345 is something like this:

1) Instruction 1, operating on state 0, of Run2, occurring on machine #1.
2) Instruction 2, operating on state 1, of Run2, occurring on machine #1.
3) Instruction 3, operating on state 2, of Run2, occurring on machine #1.
.
.
.
344) Instruction 344, operating on state 343, of Run2, occurring on machine #1
344a) Saving the state of machine #1
344b) Loading state 344 onto the probe, sending the probe into space, whatever
.
.
.
345) Instruction 345, operating on state 344, of Run3, occurring on probe #345.

Nowhere do you see any other probes.
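The checkpoint-and-replay setup being argued about can be sketched in a few lines (Python; the instruction format and names like `probe` are my own illustrative choices, not from the thread). The original run saves the state after every instruction; each "probe" then loads one saved state and executes exactly one instruction on it, with no communication to any other probe.

```python
def step(state, instr):
    """Apply a single instruction (here, just 'add n') to a state."""
    op, n = instr
    if op == "add":
        return state + n
    raise ValueError(op)

program = [("add", 1), ("add", 2), ("add", 3)]

# Original run: execute normally, saving the state after every instruction.
checkpoints = [0]                      # checkpoints[0] = initial state
for instr in program:
    checkpoints.append(step(checkpoints[-1], instr))

# Probe k: load saved state k-1 and run only instruction k, in isolation.
def probe(k):
    return step(checkpoints[k - 1], program[k - 1])

# Each probe reproduces the corresponding state of the original run,
# even though no probe ever talks to any other probe.
assert all(probe(k) == checkpoints[k] for k in range(1, len(program) + 1))
```

Note that the work of producing each checkpoint happened in the original run, which is exactly the point of contention: each probe's history runs back through the original machine, not through the other probes.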
 
So what?

The external observer is not conditioning whether or not the algorithm produces consciousness.

Uh, yeah, they are.

The algorithm just does what it does. That includes you and me. How do you know you are "really" conscious?

The "conscious" comes into play when an external observer labels our behavior as such.

But we are talking about a program run each time with the same initial values.

Maybe that is the problem -- you shouldn't be. It seems like it is difficult for you to accept that two separate instances of the algorithm would be an identical consciousness.

That's because never, ever, ever, ever, in reality is the algorithm actually the same when it comes to you and me.

If you want to think of this in terms of human experience, then you can't impose machine conditions on the scenarios. If you want to impose machine conditions, then you can't think in terms of human experience.
 
Uh, yeah, they are.

The algorithm just does what it does. That includes you and me. How do you know you are "really" conscious?

You have doubts you're conscious?

The "conscious" comes into play when an external observer labels our behavior as such.

An external observer has nothing to do with whether I'm conscious or not.
 
Woah...I'm sorry, but this post completely threw me for a loop.
That seems to be remarkably easy to do, then.

Very recently you emphatically stated that there are no such things as qualities, and that qualia are "fairy-tales".
Qualia are, specifically, an attempt to insert magic into explanations of consciousness - such as your absurd suggestion that they are "elementary". All you are claiming here is "It's magic! It is it is it is!"

Elementary what? That's not an explanation, that's an avoidance of explanation, an avoidance of existing explanations, and an assertion that explanation is impossible.

Since we have explanations that you refuse to address in any way other than expressions of personal incredulity, your insistence that no explanation is possible falls flat.

Now you're claiming that not only are SR-systems in some sense "magical" but that they are -qualitatively- different than all others?? Uhm...Do you care to clarify? :confused:
I've already clarified this dozens of times. If you were to read my posts rather than attempting to twist them to fit your preconceptions, you'd already know this.

A self-referential system can do things that a non-self-referential system cannot; specifically, it can reference itself. That opens up a whole new class of behaviours. Introspection, self-modifying algorithms, that sort of thing. The brain does this, and so do many computer programs.
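A toy example of what that new class of behaviours looks like (Python; this rule-table construction is my own illustration, not Hofstadter's): a system whose rules can refer to, and even modify, the very table they live in.

```python
# A rule table that contains rules about itself.
rules = {}

def apply_rule(name, arg):
    return rules[name](arg)

# An ordinary, non-self-referential rule.
rules["double"] = lambda x: x * 2

# Introspection: a rule whose subject is the rule table it belongs to.
rules["count_rules"] = lambda _: len(rules)

# Self-modification: a rule that installs a new rule into its own table.
def add_triple(_):
    rules["triple"] = lambda x: x * 3
    return "installed"
rules["add_triple"] = add_triple
```

After calling `apply_rule("add_triple", None)`, the table contains a `"triple"` rule it did not start with; nothing in a fixed, non-self-referential lookup table can do that.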

And no, a regulatory system is not self-referential. Go read Hofstadter; I'm not about to waste 600 pages on you.

Now, back to ignore you go, until you actually have something to say.
 
And somehow out of a collection of qualions full-blown human consciousness emerges. I don't get warm and fuzzy over this any more than I do over consciousness emerging from conventional brain function.
Rather less, in fact, since we actually have brains, and no-one has ever even provided a coherent definition of an "elementary" quale.
 
Under some definitions of "conscious", yes. And so should you.

Why should I doubt I'm conscious? What definition of "conscious" are you using? Do you think you might be a p-zombie?


Then you don't know much about behaviourism. I thought I mentioned it a short while ago.

Behaviorism has been on life support for decades, but anyway, let's explore this line of reasoning. I said:

An external observer has nothing to do with whether I'm conscious or not

And you replied
Then you don't know much about behaviourism

So tell me exactly what you mean by this. Do you think an external observer is a necessary condition for consciousness? That we are unconscious unless something is observing us?
 
An external observer has nothing to do with whether I'm conscious or not.

Wrong.

An external observer has nothing to do with whether you exhibit the behaviors you currently label as consciousness.

But an external observer has everything to do with you labeling those behaviors as consciousness. Trivial proof -- if you were raised by wolves, you would not be labeling it consciousness. You wouldn't even know of the word, actually.
 

So you're not conscious if no external observer is around? Just to be clear, you're actually claiming this?

An external observer has nothing to do with whether you exhibit the behaviors you currently label as consciousness.

And nothing to do with whether I'm conscious or not. Your position has the absurd result that one falls in and out of consciousness depending on whether someone leaves the room or not. You guys can't really believe this.

But an external observer has everything to do with you labeling those behaviors as consciousness.

Do you think the label "consciousness" and consciousness itself are the same thing? That if a person never learned the word "conscious" they would be unconscious? Seriously?

Trivial proof -- if you were raised by wolves, you would not be labeling it consciousness. You wouldn't even know of the word, actually.

And if I only knew Spanish, I would not be labelling it "consciousness" either. Are Mexicans unconscious? :rolleyes:

Labels and the things being labelled, again, are two entirely different things.

Are feral children conscious? Was Helen Keller unconscious until Anne Sullivan showed up?
 
AKuManiMani said:
Very recently you emphatically stated that there are no such things as qualities, and that qualia are "fairy-tales".

Qualia are, specifically, an attempt to insert magic into explanations of consciousness

Recap: I provided the dictionary definition of the word "qualia"; as in sense-data or feelings having distinctive qualities.

You responded that there are no -qualities-, and that you only experience -quantities-. Not long after you had this to say to HypnoPsi:

HypnoPsi said:
Pixy's waving around the words "self-referencing" like they're magic

In a sense, they are magic. Self-referencing systems are qualitatively different from non-self-referencing systems.

So, not only do you state that SRIPs are in some way magical, you go on to state that they are -qualitatively- different from other processes -- in direct contradiction to your earlier assertion that there are no such things as qualities.


- such as your absurd suggestion that they are "elementary".

[...]

Elementary what? That's not an explanation, that's an avoidance of explanation, an avoidance of existing explanations, and an assertion that explanation is impossible.

Eh? The basic component of a conscious experience is "a sense-datum or feeling having a distinctive quality." If an entity does not possess this basic capacity then it is not conscious, hence why I stated that it's "elementary". This is not an assertion that no explanation is possible, it's simply stating an essential feature of consciousness.

All you are claiming here is "It's magic! It is it is it is!"

"In a sense, they are magic."

If you say so, Pixy...


Since we have explanations that you refuse to address in any way other than expressions of personal incredulity, your insistence that no explanation is possible falls flat.

I've never insisted such a thing. In fact, I've done nothing but argue the opposite.

I've already clarified this dozens of times. If you were to read my posts rather than attempting to twist them to fit your preconceptions, you'd already know this.

I've no need to twist your posts. Direct quotes do the trick just fine.

A self-referential system can do things that a non-self-referential system cannot; specifically, it can reference itself. That opens up a whole new class of behaviours. Introspection, self-modifying algorithms, that sort of thing. The brain does this, and so do many computer programs.

Yea, so?


And no, a regulatory system is not self-referential. Go read Hofstadter; I'm not about to waste 600 pages on you.

Now, back to ignore you go, until you actually have something to say.

Bye.
 
Information is simply matter/energy that some other system of matter/energy uses to represent some other system of matter/energy.

Okay so basically, in your definition, information is a representation of another system. By this definition, to process information is to process representations, not the object(s) of it.
 
When I was a child of about ages 6--14, I would occasionally suffer from "clock speed syndrome." All of a sudden things would seem to be happening at a faster speed. I felt like I was walking faster, talking faster, seeing the world move around faster. Everything that other people did would seem faster, too. However, no one noticed, which means that I was not actually moving faster than usual or talking faster than usual. The episodes lasted a few minutes. At the time this was attributed to my brain growing faster than my skull, although that might be silly.

The brain's clock speeds are plastic.

http://www.steadyhealth.com/unusual__fast_feeling__in_my_head_t160555.html

~~ Paul

The only time I've ever personally experienced anything so dramatic was while I was in a semi-waking state. I was experiencing sleep paralysis but I was lucid enough to be aware of my surroundings. It seemed as if everything was in fast forward.
 