
My take on why the study of consciousness may indeed not be so simple:

Say, why do we need to bother running the single instructions in the processors flung across the universe? After all, the sequence of input states for each processor already represents all the results except the final one. Just add a NOP instruction at the end and we've got them all.

The sequence of input states embodies the consciousness. Now how does this differ, if at all, from the transporter-photographed horse?

Does this make it easier for everyone? :D

~~ Paul
 
I'm not sure why you think you two are disagreeing.

The initial state of the processor in each probe is based on the run up to that point. If you then execute a single instruction on each processor, in order, you will produce the same results as the initial run, including consciousness.
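The single-instruction-per-probe scheme can be sketched in a few lines (a toy register machine; the program, register names, and helper are all invented for illustration):

```python
# Toy register machine, run two ways. Run1 executes a short program
# normally while recording the state before each step. "Run3" then
# executes each step in isolation, on a separate "probe", seeded only
# with the recorded state for that step.

def step(state, instr):
    """Execute one instruction on a copy of the state; return the result."""
    regs = dict(state)
    op, dst, a, b = instr
    if op == "add":
        regs[dst] = regs[a] + regs[b]
    elif op == "mul":
        regs[dst] = regs[a] * regs[b]
    return regs

program = [("add", "r2", "r0", "r1"),
           ("mul", "r2", "r2", "r2"),
           ("add", "r0", "r2", "r1")]

# Run1: normal sequential execution, snapshotting the state before each step.
state = {"r0": 2, "r1": 3, "r2": 0}
snapshots = []
for instr in program:
    snapshots.append(state)
    state = step(state, instr)
run1_final = state

# Run3: each instruction runs on its own "probe", seeded from a snapshot.
probe_results = [step(snap, instr) for snap, instr in zip(snapshots, program)]
run3_final = probe_results[-1]

print(run1_final == run3_final)  # True: the isolated steps reproduce the run
```

Each probe only ever sees its snapshot; none of them communicates with any other, yet the final probe ends in the same state as the original run.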

What am I not seeing?

~~ Paul

You are not seeing that Robin is asserting that each probe is somehow linked to the previous one, such that the entire operation is a single consciousness distributed across space.

This is not true -- I can remove an arbitrary number of probes prior to instruction X and the consciousness will still exist at probe X, just like it did before.

Because the state that probe X operates on comes from Run2, not the other probes in Run3.
 
Say, why do we need to bother running the single instructions in the processors flung across the universe? After all, the sequence of input states for each processor already represents all the results except the final one. Just add a NOP instruction at the end and we've got them all.

The sequence of input states embodies the consciousness. Now how does this differ, if at all, from the transporter-photographed horse?

Does this make it easier for everyone? :D

~~ Paul

Or this -- that is another illustration of how the probes have nothing to do with each other.
 
What is wrong with the organic chemistry of neurons? Is it broken? Or is this a definitional thing again?

Or do you just have something else you are trying to say?

Just tell me where to find "consciousness" in my organic chemistry textbook. Or the maths book, or the physics book, for that matter.

There is no physical theory. Over time, the goal has shifted to a functional theory being deemed sufficient, which I cannot accept. Solid functional theories collapse into physical theories.
 
So you're not conscious if no external observer is around? Just to be clear, you're actually claiming this?



And nothing to do with whether I'm conscious or not. Your position has the absurd result that one falls in and out of consciousness depending on whether someone leaves the room. You guys can't really believe this.



Do you think the label "consciousness" and consciousness itself are the same thing? That if a person never learned the word "conscious" they would be unconscious? Seriously?



And if I only knew Spanish, I would not be labelling it "consciousness" either. Are Mexicans unconscious? :rolleyes:

Labels and the things being labelled, again, are two entirely different things.

Are feral children conscious? Was Helen Keller unconscious until Anne Sullivan showed up?

Someone that does not speak English, and instead speaks Spanish, does not know about consciousness. They instead know about conocimiento. Or if they speak German, Bewusstsein.

If you were raised by wolves, and had no words, you would not know about the word consciousness. You would only know the awareness of the world that English speakers label consciousness.
 
Your example program was also unduly complicated to go through in a forum discussion, so let me fix that.

Let's focus on two instructions: X and Y. X will involve a store to a piece of memory. Y involves a fetch from that location. No store instructions exist in between.

To give the processor something to do, let's have three registers--R1, R2, and R3. X stores R3 at memory location R1+R2. Y fetches what is at memory location R1 and stores it into R2. To illustrate something coherent going on, we'll have Y fetch what X stored. More precisely, X precedes Y, and there are no stores to the specified memory address between X and Y. Arbitrarily, I'm also going to say there are a few instructions between X and Y.

Let's also suppose this is a straightforward case, where our expected side effects are considered to be complete by the execution of the next instruction. In other words, if we do a fetch at step k into some register r, register r has the expected value at step k+1.

So let's say at X, R1 is 10, R2 is 30, and R3 is 12. So the program is supposed to store the value 12 at address 40. At Y, R1 is 40, so it's supposed to fetch what is at address 40 and store it into R2. By definition of memory on an imperative machine, R2 would be 12.

For Run2, the same thing. Only we're storing the state of all of the registers.

Now for Run3...

Between steps X-1 and X, something was put into the registers. By step X, R1 is 10, R2 is 30, and R3 is 12. The processor adds 10 to 30 and gets 40, so it takes 12 and performs a write to address 40. It turns on all of the lines, enables the bus, whatever--and 12 is supposed to go in. It does.

Then between X and X+1, everything is erased. Fine. But the registers are loaded, the opcode fetched, and everything put in, and the processor is in a particular state--the same state, mind you, that it was in for X+1 at Runs 1 and 2.

Now just prior to step Y, everything is erased, and then a fetch is performed from this cache. R1 gets the value of 40 at this fetch. At step Y, the machine is supposed to retrieve what is at memory location 40, and store it in R2. So it performs this operation. It turns the lines on, enables the bus for a read, and so forth. The value 0 dutifully flows in. So now R2 has 0 in it... assuming there's enough time to settle. And we proceed to step Y+1.

But before doing so, there's that fetch from the cache. That fetch takes R2 and puts the value 12 into it. It loads in the opcode it's supposed to load in. And now we're at step Y+1, and we're, by definition of your problem, in the same state as Run2 (and Run1) was in Y+1, within the processor.

It appears to me that if you look at memory as a black box that gives you certain guarantees, which have to be met in order to proceed through the algorithm, and take a processor's-eye view, then in Run1, Run2, and Run3 the black box is performing its required duties, so the processor is in no way impaired.
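As a sanity check, the X/Y walkthrough above can be mirrored in a toy sketch (the values 10, 30, 12, and 40 come from the steps described above; the Python representation itself is just an illustration):

```python
# X stores R3 at address R1 + R2; between steps, Run3 erases memory and
# reloads registers from the saved Run2 state. Y's fetch therefore reads 0
# from the erased memory, and the subsequent reload restores the value 12,
# leaving the processor in the same state Run1 and Run2 had at Y+1.

MEM_SIZE = 64

# Step X: R1=10, R2=30, R3=12 -- store 12 at address 40.
memory = [0] * MEM_SIZE
regs = {"R1": 10, "R2": 30, "R3": 12}
memory[regs["R1"] + regs["R2"]] = regs["R3"]    # memory[40] = 12

# Between X and Y, everything is erased.
memory = [0] * MEM_SIZE

# Just before Y, the saved Run2 state is loaded: R1 = 40.
regs = {"R1": 40, "R2": 30, "R3": 12}

# Step Y: fetch memory[R1] into R2. The erased memory dutifully supplies 0.
regs["R2"] = memory[regs["R1"]]                 # R2 is now 0

# Before Y+1, the saved Run2 state is loaded again: R2 becomes 12.
regs["R2"] = 12

print(regs["R2"])  # 12 -- the same processor state as Run1/Run2 at Y+1
```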

And nothing I said here is incorrect, nor should it be controversial. Agree?
Disagree. I agree that Run3 is different than Run2; you defined it differently. But I disagree that you have said why it's different. Here is what you are saying:

However, the phrase I have highlighted above is true for:
  • Run1 at X-1
  • Run1 at X
  • Run1 at X+1
  • Run1 at Y-1
  • Run1 at Y
  • Run1 at Y+1
  • Run2 at X-1
  • Run2 at X
  • Run2 at X+1
  • Run2 at Y-1
  • Run2 at Y
  • Run2 at Y+1
  • Run3 at X-1
  • Run3 at X
  • Run3 at X+1
  • Run3 at Y-1
  • Run3 at Y
  • Run3 at Y+1
...because store and fetch are black box operations. The processor certainly doesn't run over to the memory and drop the value in. It just flips some wires to address something, flips some other wire to connect some register to it, and flips another wire to signal that a write should be performed. Things happen. Stuff is done. And it's "right" if and only if that black box does what memory does once you do the same thing with the read line set instead of the write one.
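A minimal sketch of memory as that kind of black box (the class and method names are invented):

```python
# The processor's side of the contract is just line-flipping; "right" means
# a read returns whatever the last write to that address latched in.

class BlackBoxMemory:
    def __init__(self):
        self._cells = {}

    def access(self, address, write_line, data=None):
        if write_line:                      # write: latch data at the address
            self._cells[address] = data
            return None
        return self._cells.get(address, 0)  # read: last value written, else 0

mem = BlackBoxMemory()
mem.access(40, write_line=True, data=12)   # the store at step X
value = mem.access(40, write_line=False)   # the fetch at step Y
print(value)  # 12 -- the contract holds regardless of what's inside the box
```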

Now, given that your highlighted phrase is true for all of the above, I'm having problems finding out where this difference is that you're trying to point out.

That's what we in the business call "the same".
Yep.

Yep.

Now the above sentence is so muddled and confused I can hardly make sense out of it.

Part 1, before the dash:

The processor cycle, let's say, is step Y. X is one of the rest of the steps. Y isn't in total isolation from X. What happens at Y should depend on X. If it doesn't, that means that memory is failing to supply the black-box guarantees that we rely on in order to run the algorithm.

And yes, that is what I'm claiming.

Part 2, after the dash:

The set of instructions is the entire Run1. I don't know what you mean by "something". But saying that Y isn't isolated from X says absolutely nothing about Run3 being different than Run1.

Furthermore, I thought you said you were showing what the difference is.

But regardless, part 2 being completely different than part 1, and this "showing a difference" thing somehow mutating within your post into "same thing", and "set" being the same as "cycle", etc... I cannot make a lick of sense out of the aforementioned quoted text. Could you please help?

Robin is saying they are the same (in that each run produces a unique instance of the consciousness) because the algorithms are the same.

Pixy and I are saying no -- there is really no such thing as a real, physical, "black box." The algorithm for Run3 must be different from Run2, because it involves saving states of Run2 and loading them in Run3. Those steps are not included in the algorithm of Run2. And the effect of changing the algorithm is to make the majority of Run3 actually occur during Run2.

Which is why I said it would be the "same" consciousness as Run2, not a truly unique instance (as with Run1 vs Run2). If everything prior to operation X occurred during Run2, then clearly everything prior to operation X was part of the Run2 instance, not the Run3 instance.
 
When debugging segments of machine code, a frequent technique is to run a particular segment of code with a data value patched in instead of produced by some other operation - perhaps because the code that writes the data hasn't been written yet. We can do this because the code runs in exactly the same way regardless of the source of the data.

That's the key to understanding how computers work. Each segment works entirely independently of the next. You can save the state of a system, reload it, or even key in the state by hand. (Impractical now, but that's how programs used to be loaded in the good old days).
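For illustration, here is a sketch of that patching technique (the function names are invented): the segment under test cannot tell whether its input was computed by earlier code or keyed in by hand.

```python
def earlier_stage():
    """The code that normally produces the data value."""
    return sum(range(10))          # 45

def segment_under_test(data):
    """The segment being debugged; it can't tell where `data` came from."""
    return data * 2 + 1

computed = segment_under_test(earlier_stage())  # data from the real source
patched = segment_under_test(45)                # data patched in by hand
print(computed == patched)  # True -- identical behavior either way
```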

The idea that a "system" "knows" where it gets its data from is just more mysticism and anthropomorphism.

Fantastic.

Since you are correct, I sure am glad that I haven't said anything to the contrary.

I have said that the instance of an algorithm would or would not be the same as some other instance.

I haven't said anything about the consciousness that the algorithm represents knowing anything about whether it was or was not the same as any other instance.

In fact, I clearly stated that the idea of a consciousness knowing it was not some other consciousness is a logical contradiction.

Way to pay attention to the discussion!
 
Let's say we rig up a Star Trek transporter so that it can make a copy of an object without destroying the object. Now let's configure it so that it can make 100 copies per second.

Out in a large field, send a horse galloping past the transporter and have it make 100 copies of the horse and array them across the field. Now we have "stills" of 1 second's worth of running horse.

Here's the question: Does this series of horse copies still embody running-ness?

~~ Paul

That isn't the question.

The question is whether each of those copies is the same instance of running-ness, or a separate instance of running-ness.
 
rocketdodger said:
This is not true -- I can remove an arbitrary number of probes prior to instruction X and the consciousness will still exist at probe X, just like it did before.

Because the state that probe X operates on comes from Run2, not the other probes in Run3.
I don't see how consciousness can exist in a single probe. Consciousness is the sequence of events represented by all the probes (or at least some subset of them). They aren't communicating, but their states, arrayed throughout space, are the consciousness.

If it is not the case that we need a sequence of states to produce consciousness, then every particle in the universe is conscious.

~~ Paul
 
rocketdodger said:
The question is whether each of those copies is the same instance of running-ness, or a separate instance of running-ness.
Each individual copy has no running-ness at all. It is the sequence of copies that embodies running-ness, assuming we are willing to grant that there is any running-ness in the first place.

~~ Paul
 
Just tell me where to find "consciousness" in my organic chemistry textbook. Or the maths book, or the physics book, for that matter.

There is no physical theory. Over time, the goal has shifted to a functional theory being deemed sufficient, which I cannot accept. Solid functional theories collapse into physical theories.


Hmm, are they discussing neurons in your textbook? Is neurology not good enough either?
 
Hmm, are they discussing neurons in your textbook? Is neurology not good enough either?

What is it about particular neural processes that causes some sensory input to be felt as a particular sensation or experience? What physical property differentiates the quality of these experiences? How is this process expressed through the biochemistry of neurons? What part of the system actually has the experience(s), and what are the relevant physical properties of this portion of the system that cause it to be subjectively sensible?

These are things I would like to see answered in my textbook.
 
Robin is saying they are the same (in that each run produces a unique instance of the consciousness) because the algorithms are the same.
The part in parentheses isn't really of concern for me one way or the other. As long as it's coherent and makes sense, I'm fine with it. Any disagreements I chalk up to a difference in language, until such a point as it's bound to something specific enough to address.

But the word "algorithm" gets into technical language. That I can talk to.
Pixy and I are saying no -- there is really no such thing as a real, physical, "black box." The algorithm for Run3 must be different from Run2, because it involves saving states of Run2 and loading them in Run3.
The runs are different, but why would you say the algorithm is different? The words "same" and "different" are contextual... sometimes five nickels is the same as a quarter. Sometimes five nickels are different than a quarter.

In order to know if we have a real disagreement, I want to know what context you're using to judge if algorithms are different. With that in mind, suppose that I grab a sheet of paper and perform a sieve of Eratosthenes for the numbers 1 through 100 (skip 1, point at 2, cross out every 2 numbers past that, yada yada). And let's compare that to a BASIC program performing a sieve of Eratosthenes on a VIC-20.

I'm definitely not a VIC-20, and I'm not even using BASIC. But we're going to go through the same steps and produce the same result. So the VIC-20 is five nickels, and I'm a quarter. In the context of running an algorithm, are we performing the same algorithm?

If not, suppose that on Wednesday I perform a sieve of Eratosthenes on the numbers 1 through 100. And suppose that on Thursday I perform a sieve of Eratosthenes on the numbers 1 through 100. The brain is very complex, and I'm constantly changing, and I'm pretty sure I'm not doing exactly the same thing Wednesday, in my brain, as I'm doing Thursday. So on Wednesday I am five nickels. And on Thursday I'm a quarter. In the context of running an algorithm, am I performing the same algorithm on Wednesday that I perform on Thursday?
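For concreteness, a minimal sieve of Eratosthenes -- the same steps whether performed on paper, in VIC-20 BASIC, or in the Python sketch below:

```python
def sieve(n):
    """Skip 1, point at 2, cross out every multiple, repeat for 2..n."""
    crossed_out = [False] * (n + 1)
    primes = []
    for candidate in range(2, n + 1):
        if not crossed_out[candidate]:
            primes.append(candidate)
            for multiple in range(candidate * 2, n + 1, candidate):
                crossed_out[multiple] = True
    return primes

print(sieve(100)[:5])  # [2, 3, 5, 7, 11]
```

Whatever substrate carries out those steps, the crossings-out happen in the same order and the same primes fall out -- which is exactly what makes the "same algorithm" question interesting.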
 
AkuManiMani said:
What is it about particular neural processes that causes some sensory input to be felt as a particular sensation or experience? What physical property differentiates the quality of these experiences? How is this process expressed through the biochemistry of neurons? What part of the system actually has the experience(s), and what are the relevant physical properties of this portion of the system that cause it to be subjectively sensible?

These are things I would like to see answered in my textbook.
Me, too. In the meantime, no need to introduce all-of-a-piece, fill-the-gap consciousness-entities. Patience.

~~ Paul
 
Me, too. In the meantime, no need to introduce all-of-a-piece, fill-the-gap consciousness-entities.

~~ Paul

I'm not offering an explanatory gap filler. All I'm doing is pointing to a real phenomenon [i.e. consciousness], describing features of it, and naming those features.

When we are conscious we have experiences, and those experiences are made up of a wide range and combination of subjective qualities. When we are unconscious there is no experience of any subjective qualities. It just so happens that there is already a word in the English language for such subjective qualities: qualia.

What I'm taking issue with is that many participating in this discussion not only refuse to address the problem that we currently have no scientific explanation of subjective experience, but also choose to ignore subjectivity altogether while claiming to explain consciousness.
 