My take on why the study of consciousness may indeed not be so simple

I didn't mean "piece" literally. What is the simplest conscious thing that people do? Put another way, what is the evolutionarily oldest conscious brain function?

If you can't break it down, I daresay you can't define it. And if you can't define it, what are we arguing about?

~~ Paul

Or "what are we?" even.

If you don't have any subjective experiences, then you should skip this discussion.
 
For starters, I don't know what you mean by a physical effect. Do you mean like generating heat? If so, then there is a whole new physics going on here, or some sort of dualistic process.

~~ Paul

Obviously if consciousness is to be explained by physical theory, we need new physics. That's by definition, since physics has nothing to say about it at present. New physics is nothing to be frightened about.
 
And is this heat-production-like consciousness epiphenomenal?


Not to mention discovered in the first place. Lots of peering at neurons doesn't seem to have done the trick so far.

So what are the attributes of this physical effect that make us conscious? Why does this make you feel any more enlightened than thinking of consciousness as a conventional brain function?

~~ Paul

If there's no theory there's no theory. There's no point in just looking at something the brain does and guessing.
 
It comes down to how we define consciousness. If we define it by behaviours - by how it responds to information - then a static representation of a conscious system isn't itself conscious, because it can't respond to anything. You have to actually instantiate the algorithm in some way.

Yeah I understand that.

I am not quite sure what I am thinking :) Doesn't matter anyway, since all of us agree on the important stuff.
 
Let me give an example. Take a simple processor model and the following program:
Code:
; IP = instruction pointer, IR = index register, FL = flags, A and B = registers
; (nn) means indirect: the value held at address nn
IP=0, IR=0, FL=0, A=0, B=0
00 LD IR 0F    ; point IR at the first data item
01 XOR A A     ; zero A
02 ST A (14)   ; initialise the running total at address 14
03 LD A (14)   ; load the running total
04 LD B (0E)   ; load the terminator value (FF)
05 CMP A B     ; compare the total with FF
06 JE 20       ; if equal, exit (address 20 lies outside this listing)
07 NOP
08 LD B (IR)   ; fetch the next data item
09 ADD A B     ; add it to the total
0A ST A (14)   ; store the total back
0B INC IR      ; advance to the next data item
0C JMP 03      ; loop
0D RET
0E FF          ; data from here down
0F 01
10 21
11 30
12 21
13 FF
14 00
I can implement this using a sparse array for memory, so that it holds only the values actually required. I could then run the program once and package each step up as an individual packet holding the instruction, the values it needs, and the register state:
Code:
; one packet: the register state, the instruction, and the memory value it reads
IP=08, IR=11, FL=0, A=22, B=0
08 LD B (IR)
11 30
and swap each one in, in order. To the processor this would be no different from context switching or paging.
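To make the record-and-replay idea concrete, here is a minimal Python sketch. The register machine, instruction set, packet format and function names are simplified illustrations, not the toy processor above:

Code:
# A minimal sketch of the record-and-replay idea, assuming a much-simplified
# register machine. Sparse memory is just a dict holding only used addresses.

def step(instr, regs, mem):
    """Execute one instruction against the registers and sparse memory."""
    op, *args = instr
    if op == "LDI":                          # load immediate into a register
        regs[args[0]] = args[1]
    elif op == "LD":                         # load a register from memory
        regs[args[0]] = mem.get(args[1], 0)
    elif op == "ST":                         # store a register to memory
        mem[args[1]] = regs[args[0]]
    elif op == "ADD":                        # add register args[1] into args[0]
        regs[args[0]] += regs[args[1]]
    regs["IP"] += 1

def run2(program):
    """Normal run, saving one packet per step: the instruction plus
    snapshots of the register state and sparse memory it starts from."""
    regs, mem, packets = {"IP": 0}, {}, []
    while regs["IP"] < len(program):
        instr = program[regs["IP"]]
        packets.append((instr, dict(regs), dict(mem)))  # before-state snapshot
        step(instr, regs, mem)
    return packets

def run3(packets):
    """Replay: swap each packet in, in order, and execute it in isolation.
    Each step's results are ditched; the next packet brings its own state."""
    for instr, regs, mem in packets:
        step(instr, regs, mem)

program = [("LDI", "A", 0), ("LDI", "B", 3), ("ADD", "A", "B"),
           ("ST", "A", 0x14), ("LD", "B", 0x14), ("ADD", "A", "B")]
run3(run2(program))  # instruction for instruction, the same stream as a normal run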

So this looks no different to the processor than the first run.

So why would there be a difference?

If the mind is an algorithm then I could run an algorithm equivalent to the mind on this processor.

So why would the first run produce consciousness, but the behaviourally identical second run, using the packaged instructions, not produce consciousness?

I can see from your discussion with yy2bggggs that you have some fundamental misunderstandings about the nature of computing.

This post illustrates that -- why do you think the only important part of the system is the processor?

If the system writes a location in memory as a result of calculation A, and then uses it in calculation B, in the same run, that is very different than calculation B simply being fed the pre-packaged results from calculation A in a different run. Yes, the processor doesn't know any better, but the processor isn't the full extent of the system in question.

In such a case, the notion of a "run" is muddied because you are using things calculated in a previous run as if they were calculated in the current run -- so in fact there is only a single "run."

This is trivially simple to see, and I don't understand why you are missing it. If you have to interact with the system in a different way than you did initially, then the overall algorithm is necessarily different as well.

And it is true that the details of the algorithm might not matter -- this is usually the case. But for consciousness, we are dealing with self-reference, and thus any time you muck with the information -- as you do for Run3 and Run4 -- you possibly corrupt it.
 
Not sure I see much of a difference. Granted, I'm not good at chess at all, but when playing any other game, I usually respond to a situation by evaluating the outcomes of a certain action (assigning it a value) before making it, and often thinking many moves ahead.

Now, the computer does it several moves ahead of me, and faster, but I'm not sure I see a difference in type.

See my response to your question, Belz. I explain it in more detail.
 
M'kay. What reason makes you suspect that conscious experience is a computational function carried out by the brain rather than a physical effect of brain activity?

Because you can't have a discussion like this with an individual neuron.

And I don't know of any "physical effect" that is present in some cases and absent in others. All the "physical effects" I know of are always present, and when you put a bunch of the stuff creating the effect together, it simply increases in magnitude.

If you have any counter-examples I would be glad to hear them.

If not, the only other possibility is that it is functional, and that implies that it is computational.
 
I'm not talking about consciousness. I'm talking about how information flows through a system that is performing an algorithm.
But the remark was in the context of a specific example.

In my example Run1 and Run2 are runs of the algorithm normally on a computer.

But during Run2, at each step, the register state and the before data values used by that specific instruction are saved along with the opcode.

In Run3 these are all run separately, but in order, as context switches.

So precisely the same instructions as in Run2 were run in precisely the same order on precisely the same before data items and precisely the same register states.

So I am asking: why is this run different from the last?

Rocketdodger and PixyMisa are saying that it is because the algorithm is self-referential or it can inspect itself or something and therefore the third physically identical run will behave differently.

I am saying, no: all the CPU sees when it processes an instruction is the register state, the instruction itself, and the data items it is asked to process.

It has no way of knowing or caring where that data item came from. That is what I mean by in isolation.

So the changed data are ditched after each operation, and the cached data from the previous run are swapped in, which will of course be the same.

So the next instruction comes along and duly processes it, just the same as if it had read from the memory item or register it had changed in the last step.

Physically there is no difference.

Now you are chiming in and agreeing with PixyMisa and rocketdodger that this processor cycle is not in isolation from the rest of the steps in the program - that something will be different in this physically identical set of instructions.
 
I can see from your discussion with yy2bggggs that you have some fundamental misunderstandings about the nature of computing.
In the old days I used to write machine code directly in a hex editor; I have worked on operating system internals; I can write assemblers and compilers. I know this stuff, and not just from a theoretical point of view.

You guys, on the other hand, seem to have a pretty strange idea of how a computer works.
This post illustrates that -- why do you think the only important part of the system is the processor?
Never said it was; I was just pointing out what a run of an algorithm actually means.
If the system writes a location in memory as a result of calculation A, and then uses it in calculation B, in the same run, that is very different than calculation B simply being fed the pre-packaged results from calculation A in a different run.
How?
Yes, the processor doesn't know any better, but the processor isn't the full extent of the system in question.
Well, you tell me: what in the system knows better? Just what is it that knows that one set of instructions should give a different result?
In such a case, the notion of a "run" is muddied because you are using things calculated in a previous run as if they were calculated in the current run -- so in fact there is only a single "run."
On the contrary I am being as precise as possible about what a "run" means.

Again, you have to tell me: what is it that knows that this physically identical set of instructions is not the real "run"?
This is trivially simple to see, and I don't understand why you are missing it.
I feel precisely the same way, roles reversed.
If you have to interact with the system in a different way than you did initially, then the overall algorithm is necessarily different as well.
But that is not true for any other algorithm that uses fixed data. If I animated a little man walking across the screen in these ways the result would be the same.

Between Run2 and Run3 the observed behaviour of the modelled brain is identical.

The instructions presented to the processor are identical.

What is different? How does the algorithm, or anything else, know that the changes to the context switching routines should change the result of the algorithm?
And it is true that the details of the algorithm might not matter -- this is usually the case. But for consciousness, we are dealing with self-reference, and thus any time you muck with the information -- as you do for Run3 and Run4 -- you possibly corrupt it.
Exactly the same instructions in exactly the same order with exactly the same before register values and exactly the same before data items and exactly the same operation with exactly the same results.
 
At the least there is some sort of mapping that can be done. However, you asserted that a computer must be inherently dealing with them when a brain must not.
No, I never said that.

Why does everybody change the argument?

A computer probably isn't inherently algorithmic either.
 
Ooh, this is fun.

But I don't understand how this saving mechanism works. What exactly is it saving and what does it mean to run the saved states?

Oh, I think I get it. It saves all the source values for each instruction in a packet. Then it runs the packets in order, discarding their outputs. Of course, it can't discard any outputs to I/O devices.
I am not sure why you think that makes a difference.
Then yes, the consciousness-relevant behavior would be the same.
Which is a little vague.

For any simulation you have to ask rocketdodger's question: how can you tell you are not that simulation?

So the specific behaviour I am asking about is the moment you are experiencing right now.
You know, part of our confusion may be related to what each of us is thinking the "consciousness of the simulation" actually is. Maybe we should talk about that. What is the simplest piece of consciousness a brain could have that we could simulate?
I don't think the question could really have an answer.

But any moment you experience might be the first moment you experience, and the last you ever will, if you are part of a simulation.

Even if you watch a clock's hand sweep a full hour you cannot then conclude "I know that I was conscious for an hour", because this still might be your first and last moment paired with the memory of hands having moved.

In other words, you can't tell that you are not Run4.

So this moment you are experiencing right now could be Run4.

You are holding up two hands and the information for hand 1 could be physically located at least a light year away and in complete isolation from the information from hand 2?

And yet you are seeing them apparently simultaneously.

It seems to me that you guys are in the uncomfortable position of asking if something can travel faster than light.
 
AkuManiMani said:
Hmm..I suppose that's like asking whats the simplest unit of light. The best answer we have right now is a quanta we call the photon; if its broken down any further it's no longer light. It seems to me that qualia are comparable to quanta in that they are elementary.
And somehow out of a collection of qualions full-blown human consciousness emerges. I don't get warm and fuzzy over this any more than I do over consciousness emerging from conventional brain function.

~~ Paul
 
westprog said:
Any implementation of a Turing machine can do more than a Turing machine, since it exists in the real world. The only thing a Turing machine can do is make marks on tape. Even a digital watch does more than that.
So it's the connection to the senses and motor system that makes the computation more powerful than a Turing machine?

Obviously if consciousness is to be explained by physical theory, we need new physics. That's by definition, since physics has nothing to say about it at present. New physics is nothing to be frightened about.
I'm not frightened. I'm just surprised that there has been no inkling of new physics over the past 200 years of studying the brain.

I get a bit of a god-of-the-gaps feeling from this.

~~ Paul
 
Robin said:
In other words, you can't tell that you are not Run4.

So this moment you are experiencing right now could be Run4?

You are holding up two hands and the information for hand 1 could be physically located at least a light year away and in complete isolation from the information from hand 2?

And yet you are seeing them apparently simultaneously.

And the whole run for this moment might have only taken six months?

And you are now experiencing all that information seemingly at once?

It seems to me that you guys are in the uncomfortable position of asking if something can travel faster than light.
Sorry, no idea what you're talking about.

Is someone getting consciousness out of disconnected processes strewn across the universe?

As far as the clocking is concerned, I'll describe my childhood clock syndrome after I have some dinner.

~~ Paul
 
Exactly the same instructions in exactly the same order with exactly the same before register values and exactly the same before data items and exactly the same operation with exactly the same results.

Yes, except for all of the storage instructions and locations in Run2, and all of the switching and swapping and whatnot in Run3.

Why are you arbitrarily labeling all this extra stuff as 'not part of the system' when it clearly is?

Let's use an easier-to-grasp example.

Suppose you write a program that instantiates a basic algorithm when it is run -- say, calculating the Fibonacci sequence. Pretty easy, right?

So for Run1 you start with 1, and 1, and then you add the previous two results together to get the next result.

For Run2 you do the same and, just like you said, you save the results of each step.

Now, in Run3, everything is kosher until the first context switch -- say it occurs after the value 5 is calculated. When the CPU returns to Run3, you load in the results from Run2 at the current point in the algorithm. But now you have information that was calculated during Run2, not during Run3, being processed. And the same thing happens after the next context switch, and so on and so forth. Yes, you calculate 5 again, but it doesn't matter, because it has already been calculated and you are using the previous result.

So you are in effect doing nothing more than "re-running" Run2. Which is why I said it would be the same consciousness! All of the calculations have already been made; all you are doing now is producing small snapshots of that original consciousness during the brief times when Run3 has CPU time.

And the proof that it is the original consciousness is trivial -- if anything changes to desynchronize the state of Run3 from the saved state of Run2, it will not be reflected in Run3 past the next context switch. Of course, the same thing can be said between Run1 and Run2 as well -- they are an identical consciousness because the information is identical. But, if the input had been different, at least Run2 would have caught it and the results would be different. Not so for Run3 vs. Run2.

By adding the extra instructions, you alter the algorithm, such that it is now making use of a different type of data than it was before. Before, any changes in the inputs at any step of the run would alter the rest of the run. Now, you are locked into the conditions of Run2.

Think about it -- if you were to expand the entire sequence of Run3, at an arbitrary location in Run3, the vast majority of it would actually have occurred during Run2. Only the portion since the last context switch can rightly be said to have occurred during Run3.
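Here is a minimal Python sketch of the point (the context-switch granularity and all the names are illustrative):

Code:
# Run2: compute the Fibonacci sequence normally, saving every step's result.
def run2(n):
    results = [1, 1]
    for _ in range(n - 2):
        results.append(results[-1] + results[-2])
    return results

# Run3: recompute, but at every "context switch" reload the saved Run2 state
# instead of carrying forward what this run just calculated itself.
def run3(n, saved, switch_every=3):
    a, b = 1, 1
    out = [a, b]
    for i in range(2, n):
        a, b = b, a + b                    # this run's own calculation
        if (i + 1) % switch_every == 0:
            a, b = saved[i - 1], saved[i]  # swap in Run2's values; anything
                                           # Run3 computed is thrown away here
        out.append(b)
    return out

saved = run2(10)
print(run3(10, saved) == saved)  # True -- but most of it "occurred" during Run2

And the desynchronization point is easy to demonstrate: perturb Run3's own arithmetic anywhere between switches, and the perturbation is wiped out at the next reload.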
 
Sorry, no idea what you're talking about.

Is someone getting consciousness out of disconnected processes strewn across the universe?
Yes ... you.

That was Run4 mentioned earlier (I linked you to the entire argument when I first asked your opinion about it).

If Run3 produces consciousness, and these packets are now independent, then I distribute them onto millions of smaller computing devices fitted with caesium clocks, making sure that no one device holds any consecutive instructions, and fire them into space in various directions until they are all at least a light year apart.

And then, based on the caesium clocks, they run the instructions from Run3 in order: for example, op1 runs on one device, then after a suitable wait another device at least a light year away runs op2, and so on until it gets back to the original device and starts again.

The same physical instructions are run in precisely the same order as in Run3, just a little further apart in space.

So if Run3 produced consciousness, then so should Run4.
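For concreteness, a toy sketch of the Run4 scheduling, reusing `step` and the packets produced by `run2` in the earlier sketch (the device count, and a simple tick counter standing in for the caesium clocks, are illustrative):

Code:
import itertools

def run4(packets, step, n_devices=1_000_000):
    """Each packet sits on its own distant device; synchronised clocks fire
    them in the original Run3 order. Round-robin assignment guarantees that
    no device ever holds two consecutive instructions."""
    assignment = zip(packets, itertools.cycle(range(n_devices)))
    for tick, ((instr, regs, mem), device) in enumerate(assignment):
        # at time `tick`, `device` executes its single packet in isolation;
        # nothing it computes is ever seen by any other device
        step(instr, regs, mem)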
 
Yes, except for all of the storage instructions and locations in Run2, and all of the switching and swapping and whatnot in Run3.
But I might run this on a number of systems, all with different context switching routines; why should they make a difference?

Does the algorithm depend on the details of how the data was handled in between the steps?

What you are saying is that when this consciousness-producing algorithm is run, the context switching and paging routines are all taken into account when the consciousness is produced?

What is it that can tell that the context switching routines have been monkeyed with?

Why are you arbitrarily labeling all this extra stuff as 'not part of the system' when it clearly is?
Show me where I labelled it "not part of the system".

But it might be different on a number of different systems, and the algorithm should run the same, if it is equivalent.

Again I ask, what is it that knows that the context switching routines have been monkeyed with?

What is it that knows "aha, that was not the same 5 that I calculated in the last step"?
Let's use an easier-to-grasp example.

Suppose you write a program that instantiates a basic algorithm when it is run -- say, calculating the Fibonacci sequence. Pretty easy, right?

So for Run1 you start with 1, and 1, and then you add the previous two results together to get the next result.

For Run2 you do the same and, just like you said, you save the results of each step.

Now, in Run3, everything is kosher until the first context switch -- say it occurs after the value 5 is calculated. When the CPU returns to Run3, you load in the results from Run2 at the current point in the algorithm. But now you have information that was calculated during Run2, not during Run3, being processed. And the same thing happens after the next context switch, and so on and so forth. Yes, you calculate 5 again, but it doesn't matter, because it has already been calculated and you are using the previous result.

So you are in effect doing nothing more than "re-running" Run2. Which is why I said it would be the same consciousness!
You mean in the sense that it was the "same" Fibonacci sequence calculated in your example?
 
In other words, you can't tell that you are not Run4.

So this moment you are experiencing right now could be Run4.

You are holding up two hands and the information for hand 1 could be physically located at least a light year away and in complete isolation from the information from hand 2?

And yet you are seeing them apparently simultaneously.

It seems to me that you guys are in the uncomfortable position of asking if something can travel faster than light.

No -- you are not understanding even your own example!

If Paul is Run4, he is actually <all of Run2 up until the instruction to be carried out on device D> + the next step in the algorithm, to be carried out on device D.

And the same goes for every device. A single instance of Paul isn't "distributed" among the devices, each device is a separate instance of Paul. While -- like I explained -- "most" of Paul (as in, the majority of the algorithm) was already run -- as Run2.
 
AkuManiMani said:
M'kay. What reason makes you suspect that conscious experience is a computational function carried out by the brain rather than a physical effect of brain activity?

Because you can't have a discussion like this with an individual neuron.


And I don't know of any "physical effect" that is present in some cases and absent in others. All the "physical effects" I know of are always present, and when you put a bunch of the stuff creating the effect together, it simply increases in magnitude.

If you have any counter-examples I would be glad to hear them.

The physical effect I'm referring to is sensibility: is the entity in question able to feel, or otherwise experience, in some way? Our own brains are able to perform highly sophisticated computational functions without there being any awareness accompanying that processing.


If not, the only other possibility is that it is functional, and that implies that it is computational.

Mm...Okay, I think I'm going to try a different approach to convey what I'm getting at when I speak of something being a "physical" effect, property, etc:

What, IYO, is the relationship between matter/energy and information?
 
