
Robot consciousness

Pulvinar, I'm sorry I got snappy there. I'm being run from pillar to post lately and I'm tired and just let myself lapse into rudeness. My apologies.
I actually didn't notice any rudeness from you (which means I don't notice my own, either...). I am enjoying this thread as it tests my thoughts on the subject.
 
The real question here, it seems to me, is the effect on the generation and maintenance of conscious awareness if data streams that are coherent in real time are drastically slowed so that signaling loses its coherence (from the point of view of the brain functions that are built to handle these data streams).

Are we perhaps using different definitions for coherent?

I interpret this to be talking about physical coherence, i.e. maintaining the phase relationship of waves, which can easily be handled at any clock rate above zero.

Maybe the misunderstanding is in one of the other big words you keep using. If you are tied only to the physical world, you would recognize that all physical systems have natural resonant frequencies. But in the computer world, such physical properties don't exist until they are created by simulation. The simulation is time independent and can be run at any speed (including stopped) up to the limit of the computer. The simulation may be more aesthetically pleasing to us when run at real time speed but that doesn't change the results of the simulation.
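The time-independence point can be made concrete with a minimal sketch (the oscillator and its constants are hypothetical, chosen only for illustration): a simulated physical system stepped with arbitrary wall-clock pauses computes exactly the same trajectory as one run flat out.

```python
import time

def simulate(steps, delay=0.0):
    """Euler-integrate a toy damped oscillator. The per-step wall-clock
    delay changes how long the run takes, not what it computes."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        a = -0.5 * x - 0.1 * v      # spring force + damping (arbitrary constants)
        v += a * 0.01               # dt = 0.01
        x += v * 0.01
        time.sleep(delay)           # slow the "clock rate" arbitrarily
    return x

# Same steps, wildly different speeds, identical result:
assert simulate(200) == simulate(200, delay=0.001)
```

The `time.sleep` call stands in for any slowdown, including a pause of years between steps; the simulated resonant behavior is unaffected.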
 
My problem is with making logical errors based on that label, specifically the bizarre hypothesis that the logic of information processing is what causes the real-world phenomenon of conscious experience,

But this is exactly what the computational model of consciousness asserts -- consciousness is a result of the logic and only the logic. I.e., we should be able to produce consciousness on any substrate capable of instantiating the necessary logical functions.

Consciousness should be possible in a system of buckets and pulleys, or monks and quills, and even in a city full of humans that have no idea they are instantiating the logical functions for a consciousness one level above them.

And a corollary of that is that the system can be single stepped without loss of function.

But this is a moot point, because at the very least, we should be able to single-step any system in the universe over discrete intervals of a single unit of Planck time, and the system should behave exactly normally. Consciousness has nothing to do with it.
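The single-stepping corollary has a direct analogue in code (a toy sketch; the update rule is arbitrary): express a program as discrete steps and a driver can run it flat out, step it by hand, or halt it indefinitely between steps, and the output sequence is identical.

```python
def machine():
    """A 'program' expressed as discrete steps (a generator). A driver
    can run it continuously, single-step it, or pause it for a year
    between steps; the sequence of outputs is the same either way."""
    state = 0
    while True:
        state = (state * 2 + 1) % 97   # arbitrary deterministic update
        yield state

m = machine()
first_five = [next(m) for _ in range(5)]   # step it five times, at any pace
```

Each `next(m)` is one discrete step; nothing in the machine's behavior depends on the wall-clock interval between calls.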
 
But this is a moot point, because at the very least, we should be able to single-step any system in the universe over discrete intervals of a single unit of Planck time, and the system should behave exactly normally. Consciousness has nothing to do with it.

For some reason Piggy keeps stipulating that in slowing the system down we're only slowing part of the physics, but not all of it. Then when the system breaks, he can claim that there is a minimal speed for consciousness.

But I agree that the point *should* be moot, as it's been made over and over.
 
But this is exactly what the computational model of consciousness asserts

Piggy has interpreted this study (http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1000061) to refute -- or at least provide evidence against -- the computational model of consciousness, so any argument assuming the CM's validity is a non-starter with him.

The funny thing is, his understanding of the Computational Model is so warped and riddled with lacunae that he's not even really arguing against a model anyone espouses.
 
Piggy has interpreted this study (http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1000061) to refute -- or at least provide evidence against -- the computational model of consciousness, so any argument assuming the CM's validity is a non-starter with him.

The funny thing is, his understanding of the Computational Model is so warped and riddled with lacunae that he's not even really arguing against a model anyone espouses.

How on Earth could any result from that study possibly refute any facet of the CM?
 
How on Earth could any result from that study possibly refute any facet of the CM?

Mind you, I don't find the argument remotely convincing, but I think it goes something like this:

Human consciousness is a "self-sustained reverberant state of coherent activity" (per the study)

The CM doesn't mention a "self-sustained reverberant state of coherent activity" (per CM)

Therefore the CM must be wrong. (per Piggy)

But I think what it really boils down to is this: Piggy is familiar with some neurology, and not familiar with the Computational Model, and therefore has chosen to espouse the former and reject the latter.
 
The CM doesn't mention a "self-sustained reverberant state of coherent activity" (per CM)

I have to make a funny face at that statement, since of anything that is exactly what the computational model is about.

As I have always said, this would make a lot more sense to everyone if everyone were a computer scientist.
 
False.

Just about everyone has a three-way switch somewhere in their home, where a single light is controlled by two or more switches. What is the lower operating limit of those switches?
Wait 10,000 years and, assuming your house is still supplied with electricity, try turning the lights on.
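The three-way switch is pure combinational logic, which is why it has no lower speed limit. A minimal sketch (function name mine, for illustration): the light is simply the XOR of the two switch positions, and nothing in that logic cares how fast, or how rarely, a switch is flipped.

```python
def light_on(switch_a: bool, switch_b: bool) -> bool:
    """A household 'three-way' circuit: flipping either switch toggles
    the light, i.e. the light state is the XOR of the two positions.
    The logic is timeless -- it holds at any flip rate above zero."""
    return switch_a != switch_b
```

Flip either argument and the result toggles, whether the flips are a millisecond or ten millennia apart.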

Anyhow, I was stating a version of what I thought was Piggy's position rather than anything I think relates to consciousness.
 
I have to make a funny face at that statement, since of anything that is exactly what the computational model is about.

To be precise, I should not have quoted it. The CM does not specifically mention a "reverberant" state as it is described in the article. That's what Piggy's focused on: the reverberant state of coherent activity.

Ironically, what they describe in the article reminds me a lot of something called Delay Line Memory (http://en.wikipedia.org/wiki/Delay_line_memory), which is an early type of computer memory.
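The delay-line analogy can be sketched in a few lines (a toy model, not the acoustic hardware): the stored pattern exists only as circulating bits that exit one end of the line and are fed back in at the other, so the "memory" is a self-sustained circulation rather than a static thing.

```python
from collections import deque

def circulate(line, cycles):
    """Toy delay-line memory: each cycle, a bit emerges from the front
    of the line and is re-injected at the back. The pattern is 'stored'
    only for as long as the circulation keeps running."""
    line = deque(line)
    for _ in range(cycles):
        line.append(line.popleft())   # a bit exits and re-enters
    return list(line)

pattern = [1, 0, 1, 1, 0]
# One full circulation restores the original pattern:
assert circulate(pattern, len(pattern)) == pattern
```

The point of the analogy: like the reverberant state in the article, the content here is maintained by ongoing activity, not by a fixed physical arrangement.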
 
I actually didn't notice any rudeness from you (which means I don't notice my own, either...). I am enjoying this thread as it tests my thoughts on the subject.

Regarding the idea of consciousness as a data set....

The thing is, tho we use a thing-y term for it, consciousness isn't a thing, it is a function, a behavior, a process.

Here's an example.

Let's take the case of a person viewing a subliminal image of an apple, then a non-subliminal image of an apple.

In the former case, input is processed, associated with patterns in memory, and stored in memory.

In the latter case, the same thing happens, but with the additional step that "I'm seeing an apple" is fed into the processors that generate conscious experience, and the viewer has the experience of seeing the apple.

Same data sets, but in the latter case an additional function is being performed.
 
I didn't mean "store everything", but should have said "attentional selection". The point I was trying to make is that the failure is in or before that selection, in not even providing the option for visual sensations to be selected for storage. But we are getting far off the track here.

Actually, there's a problem there.

We know from subliminal studies that the brain processes and stores information that's not made available to consciousness.

Storage is separate from consciousness, except that the memory of the conscious experience is also stored.
 
This article, which seems to be the keystone of your whole argument, only paints a picture of how consciousness happens in a human brain. It makes no claim, nor does it provide warrant for such a claim, that any and all forms of consciousness will have the features thus described. This is the same point we've been making over and over again. But if you want to simply define consciousness as "self-sustained reverberant state of coherent activity" then the whole discussion is moot. I mean, who's to argue with the latest research? Certainly not 50+ years of scientists and philosophers attending to the subject.

But here's your problem....

If you want to claim that our robot generates consciousness by some other method, you either have to describe what that method is -- which you can't do because no one knows of any other method -- or you have to propose a totally unknown method, at which point the question of what happens when the system slows down is unanswerable.

Either way, you can't get to the answer "Our robot will remain conscious".
 
On your contention that consciousness is not information processing:

Feel free to hammer away at the words "may be" in the quote. Or feel free to try and make the case that "mental content" is not information. I seriously and whole-heartedly doubt that even the researchers you cite would defend your interpretation.

Now you seem to be almost deliberately misunderstanding me.

I've already clarified this point.

I am not saying that we cannot call consciousness IP. We certainly can, and that is quite useful.

The problems I'm pointing out are (1) improper reasoning by analogy, thinking that categorizing consciousness as IP tells us more than it can, and (2) claiming that IP per se is what generates consciousness, when in fact it is a distinct physiological process which generates consciousness, and therefore in order to answer our question we must consider the limitations of the supporting hardware, i.e. the bio-physical mechanisms by which consciousness is produced.
 
See which do you disagree with:

1) We're given (in Paul's OP) that the conscious process is a running program.

2) A running program is composed of discrete steps.

3) The output of the program could include reports of its conscious experience.

4) As long as the input and step execution order doesn't change, the output of that program doesn't change.

5) We would get the same reports from it of its own conscious experience at any non-zero speed (if played to us at full speed).
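Steps (1) through (5) above can be illustrated with a minimal, hypothetical sketch: a discrete-step program whose "reports" depend only on its inputs and the order of its steps, never on how fast the steps happen to execute.

```python
import time

def run_program(inputs, delay=0.0):
    """Claims (1)-(5) in miniature: the reports are a pure function of
    the inputs and the step order. Stretching each step out with a
    delay leaves the report sequence untouched."""
    state, reports = 0, []
    for x in inputs:
        time.sleep(delay)              # stretch each step out arbitrarily
        state = (state + x) % 256      # arbitrary deterministic update
        reports.append(f"state is now {state}")
    return reports

# Identical reports at any non-zero speed:
assert run_program([3, 5, 7]) == run_program([3, 5, 7], delay=0.01)
```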

Well, reports don't matter since we can program a non-conscious robot to report anything.

The problem here is what's missing.

No matter what sort of software is being used, there has to be hardware capable of properly implementing the software to get the desired result.

In this case, the desired result is not the result of a calculation or series of logical/symbolic steps. Rather, the desired result is a non-symbolic real-world phenomenon: actual conscious awareness.

(In other words, we're asking the machine to do something, to perform an action, not to produce a symbolic or logical or mathematical result.)

We have one sure model of how this is done: the human brain.

For the question to be answerable at all, we must assume that our machine produces the real-world phenomenon of consciousness in a manner analogous to the one used by the human brain. If we propose a completely unknown mechanism, the only answer to "What would happen if...?" is "Nobody knows".

In the brain, research clearly demonstrates that the generation of consciousness is a distinct function. (It is not something that merely emerges from overall qualities of the system, like the whiteness of clouds.) Furthermore, research strongly suggests that the mechanisms carrying out this function require a sustained real-time coordination of coherent activity across the brain.

Therefore, we have to ask what happens to our robot's consciousness when brain activity becomes very spread out over time. Will there be sufficient real-time coherence for conscious processing to ignite and maintain itself?

We cannot yet answer "yes" to this question with any certainty.

When I run the Glacial Axon Machine thought experiment in my head, I cannot come up with any point in the experiment when it seems conscious experience would ignite.

I might be wrong about that.

But there's no way we can assert that we know consciousness would be possible under those conditions.
 
Are we perhaps using different definitions for coherent?

I interpret this to be talking about physical coherence, i.e. maintaining the phase relationship of waves, which can easily be handled at any clock rate above zero.

Maybe the misunderstanding is in one of the other big words you keep using. If you are tied only to the physical world, you would recognize that all physical systems have natural resonant frequencies. But in the computer world, such physical properties don't exist until they are created by simulation. The simulation is time independent and can be run at any speed (including stopped) up to the limit of the computer. The simulation may be more aesthetically pleasing to us when run at real time speed but that doesn't change the results of the simulation.

Sorry to use big words. I'll do my best to use smaller ones from here on out.

I'd like to hear more about maintaining the phase relationship of waves. Do you think that this implies that consciousness can be maintained if the neurons are firing extremely slowly?

A glacial consciousness would be mighty interesting.

But yes, I am talking about the physical world and not simulations. I consider simulations irrelevant to the OP.
 
If you want to claim that our robot generates consciousness by some other method, you either have to describe what that method is -- which you can't do because no one knows of any other method -- or you have to propose a totally unknown method, at which point the question of what happens when the system slows down is unanswerable.

You seem to be under the impression that the Computational Model and the "reverberant coherent state" model are mutually exclusive. They are not. The CM is a description of consciousness at a different level than the hardware level set forth in the study you cited.

For instance, the portion I quoted from the article, which talks about how consciousness has an object, logically implies that consciousness is a form of information processing. The object is information; the processing is the coherent reverberant state.

As far as proposing methods, we've been doing that all along in this thread. The long and short of it is: it doesn't really matter as far as the question in the OP.

If you slow down some parts of the consciousness-generating system but not others, the system will most likely break, and your "what it's like" is "it's like not being conscious".

If you slow down all the physics of the consciousness-generating system but leave the inputs real-time, then your "what it's like" is probably a jumbled mess--conscious, but confused.

If you slow down the physics of the whole system, including the inputs, then, from the point of view of the consciousness, nothing has changed. From the point of view of an outside observer, the whole thing is just running arbitrarily slowly.
 
But this is exactly what the computational model of consciousness asserts -- consciousness is a result of the logic and only the logic. I.e., we should be able to produce consciousness on any substrate capable of instantiating the necessary logical functions.

Consciousness should be possible in a system of buckets and pulleys, or monks and quills, and even in a city full of humans that have no idea they are instantiating the logical functions for a consciousness one level above them.

You've made an improper logical leap there.

There's no evidence that logic per se can generate the activity of consciousness. It's a claim that's not only unsupported, but downright bizarre, seeing as how we know of no other real-world phenomenon that can be generated ex nihilo by mere "logic".

Every other activity in the real world requires some sort of physical mechanism, and one that is physically appropriate to perform the action.

The substrate has to be able to do whatever mechanically needs to be done to make the action happen.

When you compare the things you mention with what the brain appears to do bio-mechanically when it generates conscious experience (what little we know of it, at least), they do not appear properly equipped to perform the task.
 
For some reason Piggy keeps stipulating that in slowing the system down we're only slowing part of the physics, but not all of it. Then when the system breaks, he can claim that there is a minimal speed for consciousness.

But I agree that the point *should* be moot, as it's been made over and over.

I understand that we're slowing down the operating speed.

We're not slowing down time, we're not slowing down all the atoms.

It's clear what the OP intends there.

If literally all the physics were slowed down, it would be mere time dilation, and there would be no question.
 
Piggy has interpreted this study (http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1000061) to refute -- or at least provide evidence against -- the computational model of consciousness, so any argument assuming the CM's validity is a non-starter with him.

The funny thing is, his understanding of the Computational Model is so warped and riddled with lacunae that he's not even really arguing against a model anyone espouses.

Give me a nutshell version, then.

If this model insists that consciousness arises as a result of the logic alone, then you bet I reject it, as it is absurd on its face, and defies everything we know about the world in general and the brain in particular.

However, keep in mind that I'm not arguing against the properties of TMs, and I'm not arguing that artificial consciousness is impossible.
 
