Robot consciousness

You claim that a computer running a movie at an arbitrarily slow speed is not really "playing a movie"--presumably because it wouldn't make sense to you while you watched it. But this does not demonstrate an inadequacy in the computer. It only points out a mismatch between the computer's operating speed and your brain's operating speed. It's exactly the same mismatch that would exist if you were to slow down the brain's (any brain's) operations compared to the outside world's speed. But it doesn't follow that your brain is not conscious at that speed, merely that it wouldn't be able to make much sense of the inputs.

No, that's not my point.

I think drkitten's words were particularly telling when he responded that "there are no high level outputs".

When we look at a Turing Machine and say that it produces the same outputs regardless of operating speed, we're actually saying something quite trivial when it comes to consciousness.

Why? Because the generation of conscious experience is not a calculation; it is a process. It is a physical feat performed by the brain.

If you slow down the operating speed of a computer, it won't play the movie because it can't make the laser work to read the DVD.

When we look at the difference between perception of subliminal events and events that are consciously perceived, we see real-time physical activity coordinating the latter across the brain which does not occur in the former case.

If we want to understand -- or even perceive -- the difference between these two cases, we have to view the brain as the physical organ that it is, and not as an abstracted set of logical relations.

If we do the latter, we're likely to erroneously conclude that subliminal stimuli will be perceived the same as non-subliminal stimuli.

In fact, all the available evidence indicates that the brain doesn't even operate logically.

The production of consciousness is just that -- production. It's not logic. It's a physical process.

Now don't get me wrong. I see no reason why we won't someday be able to produce artificial consciousness. And there's already evidence that the "rate" of consciousness, so to speak, is variable.

So I have no objection to the idea that machines can be conscious, or that consciousness can operate more slowly or more quickly relative to real time.

And actually, I think it would be really cool if machines could be conscious at an extremely slow rate, with one conscious moment spanning years of real time.

But when we say that TMs can produce their outputs regardless of operating speed, and that a brain can't be more powerful than a TM, we cannot actually conclude from this that no activity of the brain depends on operating speed.

That's because in the first statement "outputs" are defined at the smallest level of granularity, while in the second statement "powerful" is defined at the widest possible level of granularity.

Yes, a TM will produce the same outputs regardless of whether you pause a nanosecond or a century between calculations. But so what?
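To make the concession concrete, here's a minimal sketch (in Python, with names of my own choosing) of a TM-style stepper: a binary increment where an arbitrary pause between steps leaves the final tape contents untouched.

```python
import time

def increment(tape, pause=0.0):
    """Increment a binary number on `tape` (list of 0/1 bits, LSB last),
    sleeping an arbitrary interval between machine steps."""
    tape = tape.copy()
    head = len(tape) - 1
    while head >= 0:
        time.sleep(pause)       # a nanosecond or a century: same result
        if tape[head] == 0:
            tape[head] = 1      # write 1 and halt
            return tape
        tape[head] = 0          # carry and move left
        head -= 1
    return [1] + tape           # carry past the end: extend the tape

# Identical outputs regardless of the delay between calculations.
assert increment([1, 0, 1, 1]) == increment([1, 0, 1, 1], pause=0.01)
```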

The generation of consciousness is a physical process operating at a much higher level of organization.

I can't see that the trivial concept that calculations don't depend on speed is relevant at all.
 
You've made the claim repeatedly that information processing is a metaphor. But a metaphor has two parts. What's the other part? I ask because I think you are totally misusing the term "metaphor" and it's leading to fuzzy thinking. I would call "information processing" a description of what the brain does.

When you say that the brain doesn't do information processing, and what it really does is fire neurons and shuttle neurotransmitters around, then you are being deliberately myopic. It's the same as if you said that your body doesn't engage in something called "living", it merely pumps blood and contracts muscles.

We can describe what the brain does as "information processing", and that's accurate (if not precise). But we need to understand that we're engaging in a certain level of abstraction when we do this.

It is an error to mistake this abstraction for physical reality -- for example, by asserting that "information processing" is capable of generating a real-world instantiation of consciousness.

If you believe that "information processing" generates consciousness, then you're asserting that an abstraction produces actual physical phenomena. It would be like me saying that "living" is what causes my body to function.
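The analogy has a direct software parallel (a toy of my own, just to pin down "levels of description"): the high-level name and the low-level operations describe the same process, and the name itself causes nothing.

```python
# Low-level description: what actually happens, step by step.
def pump_blood(body):
    body["oxygen"] += 1

def contract_muscles(body):
    body["position"] += 1

# High-level description: a convenient label over those operations.
def live(body):
    pump_blood(body)
    contract_muscles(body)

body = {"oxygen": 0, "position": 0}
live(body)          # "living" describes this; the label does no work
print(body)         # {'oxygen': 1, 'position': 1}
```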
 
There is no input/output mapping (or stimulus/response pair) that the brain can produce that cannot be computed by an appropriately programmed Turing machine.

Ok. So how does this demonstrate that the process of generating conscious awareness can be maintained at any operating speed?

After all, when we're talking about the process of maintaining consciousness, we're not talking about the result of a calculation.
 
If we were subjected to flashed images over and over, we would start to recognize them as the flash itself forms its own event pattern in the brain.

Ooh, that's interesting.

Has no one ever done that experiment?

Would be hard to get subjects, tho.
 
Again, true, but that's still not answering my question. Can you give me an example of something in particular that you label as an experience, but without recalling it? I don't limit "recall" to long-term storage, but include even working storage, such as recalling the sensation you had a split-second ago.

When I say that recollection operates independently of conscious and non-conscious processing, I mean just that.

Let's take the example of subliminal and non-subliminal exposure.

In each case, the brain processes the input by matching it against stored patterns and storing the new experiences in memory.

In one case, a mixture of input and stored patterns is made available to the functions that generate conscious experience, and in the other case it is not.

The brain cannot function without recollection. But conscious experience is not recollection.
 
We could use the Pixxy definition...

Consciousness: Something a Pixxy has that makes pixxies special.


Of course, pixxies are special because they have DVDs spinning in their head and laser beams protruding from their eyes. But that definition doesn't help us understand what consciousness is or help us recognize consciousness in anything other than a pixxy.

I take that as a swipe at me.

Care to elaborate?

I don't mind. I'm happy to have my errors exposed -- how else can anyone learn anything?

But I'd rather that you be more direct if you want to object to what I'm saying.
 
Are you sure?

Let's do the same thing for my desktop machine here. We'll take the same inputs and run the machine at normal operating speed, and then again at an extremely slow operating speed.

If the task is purely logical -- say, adding numbers -- then I expect we'll get the same outputs.

But what if the task is to play a DVD? Will we then get the same outputs?

Obviously not, because in that case time increments make a difference.
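A toy contrast, assuming a simple deadline model of playback (the function names and the 30-frames-per-second budget are mine, purely for illustration): the logical task is speed-invariant, while the real-time task degrades as soon as each step outruns its time budget.

```python
import time

def add(a, b, pause=0.0):
    """A purely logical task: the answer is the same at any speed."""
    time.sleep(pause)
    return a + b

def play_frames(frames, frame_budget=1 / 30, step_time=0.0):
    """A real-time task: a frame only counts if it's ready on time.
    `step_time` stands in for how long the slowed machine takes per frame."""
    shown = []
    for frame in frames:
        start = time.perf_counter()
        time.sleep(step_time)                # "computing" the frame
        late = time.perf_counter() - start > frame_budget
        shown.append(None if late else frame)
    return shown

assert add(2, 3) == add(2, 3, pause=0.5)     # slowing changes nothing
print(play_frames(range(3)))                 # [0, 1, 2]: plays normally
print(play_frames(range(3), step_time=0.1))  # [None, None, None]: playback fails
```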

This scenario begs the question because it is based on the assumption that no high-level physical tasks that feed back into the system are involved.

Once you have that kind of setup, then everything changes.

I don't see why this is begging the question (at least not any question in the OP). If you want to propose that any part of consciousness is outside the robot, then there must be inputs from that into the robot.

As I mentioned before it's possible to add dynamic components to this model that break it, such as an AC-coupled connection in the circuit, but then it's no longer the single-steppable computer the OP is requiring.

What's clear is that the task "generate and maintain conscious experience" is quite dissimilar to the task "respond reflexively to stimulus".

Again, the real-world instantiation of consciousness is not a calculation. It is not an execution of logic. Rather, it is a performance, a physical process, and one which feeds back into the system and alters it.

The representation you propose appears too simple to capture the actual dynamics.
If the system you're referring to is the brain, then yes, it normally has feedback from the environment that affects it, but there is also plenty of conscious feedback entirely internal to the brain: imagination is the obvious example. So we can't say all consciousness is dependent on the environment.

Single-stepping the robot's program is still a performance, still a physical process, and still one that feeds back on the system and alters it.
 
When I say that recollection operates independently of conscious and non-conscious processing, I mean just that.

Let's take the example of subliminal and non-subliminal exposure.

In each case, the brain processes the input by matching it against stored patterns and storing the new experiences in memory.

In one case, a mixture of input and stored patterns is made available to the functions that generate conscious experience, and in the other case it is not.

The brain cannot function without recollection. But conscious experience is not recollection.
Subliminal messages are one of those borderline cases that we don't know which way to label. I should qualify my statement: I'm talking about the normal, clear-cut case. If you pick something that you clearly label as conscious, it will be a recollection of input or internal feedback that was learned earlier. A sort of event that you clearly can't recall will be one that you would say you were never conscious of.
 
I don't see why this is begging the question (at least not any question in the OP). If you want to propose that any part of consciousness is outside the robot, then there must be inputs from that into the robot.

As I mentioned before it's possible to add dynamic components to this model that break it, such as an AC-coupled connection in the circuit, but then it's no longer the single-steppable computer the OP is requiring.

The OP is only requiring a conscious robot.

Since we don't fully understand how consciousness is produced, we can't introduce assumptions on that count.

We have one certain example of a conscious machine, and that is the human brain. It may be that there are other ways of producing consciousness, but if we posit that the robot is conscious by any other method, which would be entirely unknown, then the question of what happens when processing is slowed becomes unanswerable.

So the question is only answerable if we suppose a robot that uses the same means to produce consciousness that the human brain does.

If that assumption results in a machine that is not a single-steppable computer, then we have to answer that consciousness won't be maintained.
 
A sort of event that you clearly can't recall will be one that you would say you were never conscious of.

I must be misunderstanding you. Surely you're not saying that anything I can't recall is something I was never aware of.

In any case, I thought we were discussing the difference between recollection and consciousness.
 
The OP is only requiring a conscious robot.
Not only that, but one running conscious software: "Consider a conscious robot with a brain composed of a computer running sophisticated software. Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains."

Any computer running software is going to be TM equivalent and thus fully synchronous and single-steppable. Even if we add asynchronous components they wouldn't be part of the programmable subset of the machine, and thus not a component of the conscious part.
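For what "single-steppable" means here, a minimal sketch (a toy two-instruction register machine of my own devising): the machine's entire state is explicit data, and one call advances it exactly one instruction, so any amount of wall-clock time can pass between calls without changing the final state.

```python
def step(state):
    """Advance the toy machine by exactly one instruction."""
    pc, acc, program = state["pc"], state["acc"], state["program"]
    op, arg = program[pc]
    if op == "ADD":
        return {**state, "pc": pc + 1, "acc": acc + arg}
    if op == "JNZ" and acc != 0:           # jump if accumulator nonzero
        return {**state, "pc": arg}
    return {**state, "pc": pc + 1}

# Count down from 3 to 0; pause as long as you like between steps.
state = {"pc": 0, "acc": 3, "program": [("ADD", -1), ("JNZ", 0)]}
while state["pc"] < len(state["program"]):
    state = step(state)
print(state["acc"])                        # 0, at any stepping speed
```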
 
I must be misunderstanding you. Surely you're not saying that anything I can't recall is something I was never aware of.

In any case, I thought we were discussing the difference between recollection and consciousness.

That's why I wrote "any sort of event" instead of "any event", since you can also forget some events of a particular sort (a type, a class). This is part of showing the lack of a difference between what sort of events can be recalled and conscious experience, not every aspect of consciousness.
 
This is part of showing the lack of a difference between what sort of events can be recalled and conscious experience, not every aspect of consciousness.

I'm afraid I'm going to have to ask you to back up and restate your idea here, because I'm lost. That sentence seems to be saying that conscious experience, on the one hand, and a category of retrievable information, on the other, are equivalent.
 
Perhaps this sums up my take on the issue of TM outputs and the generation of consciousness.

Consciousness is a behavior, not an output.

Specifically, the most recent research suggests that it is a "self-sustained reverberant state of coherent activity".

As such, it does not resemble anything like the output of a calculation or logical manipulation. We can describe the low-level activity of the brain in terms of a series of inputs and outputs, and we can observe that the series should not change simply because we lengthen the time interval between them. But we have to recognize that high-level behavior is transparent to such a model.

That's why I wrote about loss of cohesion when I first came on this thread. It is this cohesion in real time that is lost when we view brain activity in a way that restricts our view to low-level activity. We simply write it out of the picture.

To show more clearly how high-level behavior can become transparent to such models, let's look at a simpler organ: the heart.

We can map the heart according to inputs and outputs also, tracing the chain of actions at the cell level which propagate across the network with the high-level behavioral result of "pumping blood through the body".

But it's possible for those same inputs to propagate across the same network in different patterns, which results in fibrillation. Once the heart is in one state or another, it tends to remain there unless "knocked" out of that state and into a different one.

In both cases, we have an identical network and an identical set of inputs, yet the high-level behavior of the organ is drastically different.
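The heart itself is beyond a few lines of code, but the abstract point -- that one network with one update rule can settle into either of two self-sustaining global patterns depending only on its current state -- can be sketched with a toy attractor network (a small Hopfield-style net; everything here is illustrative, not a cardiac model):

```python
import numpy as np

N = 16
pumping = np.tile([1, -1], N // 2)              # one global pattern
fibrillation = np.tile([1, 1, -1, -1], N // 4)  # another, orthogonal to it

# One fixed network: Hebbian weights make both patterns stable states.
W = np.outer(pumping, pumping) + np.outer(fibrillation, fibrillation)
np.fill_diagonal(W, 0)

def settle(state, sweeps=5):
    """Apply the same local threshold rule everywhere until stable."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(N):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Same network, same rule, same update order; only the current state
# differs. Each noisy start falls back into its nearby pattern and stays
# there unless "knocked" far enough to cross into the other basin.
noisy = pumping.copy(); noisy[:2] *= -1
print(np.array_equal(settle(noisy), pumping))          # True

knocked = fibrillation.copy(); knocked[:2] *= -1
print(np.array_equal(settle(knocked), fibrillation))   # True
```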

Current research demonstrates that consciousness is a high-level behavior, marked by (among other things), "sustained voltage changes, particularly in prefrontal cortex, large increases in spectral power in the gamma band, (and) increases in long-distance phase synchrony in the beta range" [emphasis mine]. The researchers who performed the cited study recommend abandoning the search for a "neural correlate of consciousness", describing it instead as a "brain-scale distributed pattern of coherent brain activation".
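For readers unfamiliar with the term, here is a sketch (not the cited study's method; the signals are synthetic) of how "phase synchrony" between two signals is commonly quantified, via the phase-locking value: 1 for a sustained phase relationship, near 0 for unrelated activity.

```python
import numpy as np
from scipy.signal import hilbert

fs = 500                                   # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)                # two seconds of "recording"
beta = np.sin(2 * np.pi * 20 * t)          # a 20 Hz beta-band rhythm
locked = np.sin(2 * np.pi * 20 * t + 0.8)  # same rhythm, constant lag
noise = np.random.default_rng(0).standard_normal(t.size)  # unrelated activity

def plv(x, y):
    """Phase-locking value from instantaneous (Hilbert) phases."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return abs(np.mean(np.exp(1j * dphi)))

print(plv(beta, locked))   # ~1.0: coherent, sustained phase relation
print(plv(beta, noise))    # much smaller: no coherent relation
```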

So even though we can model the brain logically or mathematically, and within that model we can observe that changing the time interval between steps should not change the outputs of those steps, we cannot conclude on this basis that consciousness -- a high-level behavior, a "self-sustained reverberant state of coherent activity" in real time -- could be maintained if the real-time activity of the brain were somehow slowed dramatically.

In order to answer that question, we need to look at high-level activities.

So when we consider Joe in the blackout room under the influence of the Glacial Axon Machine, it doesn't really help us to know that his neurons are still firing in the same sequence as before. That fact alone can't answer our question about whether his brain can, under those conditions, "ignite" (to use the researchers' term) the behavior of conscious experience and sustain the global coherent activity in real time which is necessary in order for Joe to have a conscious experience of seeing the green triangle.

Right now, I can't answer the question of whether or not the brain can do that.

But if anyone can, it will have to be with reference to high-level behavior, not low-level outputs.
 
Perhaps this sums up my take on the issue of TM outputs and the generation of consciousness.

Consciousness is a behavior, not an output.
How will you determine this behaviour without any output?

My first experience with a computer was sitting at the front panel of a PDP-8 "mini", setting and reading bits manually. This is no different from hooking up a brain with wires and picking up a signal in that way.

Specifically, the most recent research suggests that it is a "self-sustained reverberant state of coherent activity".
It sounds just like what we would expect from a TM.

As such, it does not resemble anything like the output of a calculation or logical manipulation
Why?
 
I take that as a swipe at me.

Care to elaborate?


I was just perplexed that you would treat consciousness as unique to the human mind without having a working definition of what consciousness is.

I summarized a definition of consciousness based on your view and applied it to a Pixy in one statement to show how ludicrous the position is.
 
Consciousness is a behavior, not an output.

Given what we "TM-ers" mean when we talk about behaviors or outputs, a behavior *IS* in fact an output.

A stimulus is an input...

...upon which some type of information processing occurs, to produce...

...a response, which is an output.

If the response happens to be physical movement (intentional or reflexive) then the response can rightly be called a behavior. All behaviors are outputs, even if not all outputs are behaviors.
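In that framing, the whole pipeline is just a function -- something like this toy sketch (the names are mine, purely illustrative):

```python
def respond(stimulus: str) -> str:
    """Toy 'information processing': a stimulus in, a response out."""
    mapping = {
        "patellar_tap": "knee_jerk",     # a reflexive behavior
        "question": "spoken_answer",     # an intentional behavior
    }
    return mapping.get(stimulus, "no_response")

# The behavior *is* the output of the mapping.
print(respond("patellar_tap"))           # knee_jerk
```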
 
I would say that the behavior of a system is characterized by its response to stimuli.

You can analyze a behavior by viewing the output, but without also knowing or controlling the input you won't get far.
 
I was just perplexed that you would treat consciousness as unique to the human mind without having a working definition of what consciousness is.

I summarized a definition of consciousness based on your view and applied it to a Pixy in one statement to show how ludicrous the position is.

I'm sorry, then; I must not have been clear about my position.

I don't believe consciousness is unique to the human mind.

I'd bet anything that dogs are conscious, for example, and that it's possible to create conscious machines.

But for the purposes of the question in the OP, we have to assume an artificial brain that produces consciousness by methods similar to those (on a high level) employed by the human brain, b/c it's the only thing we can be absolutely certain does produce consciousness.

If we propose that our robot has a brain that produces consciousness in some other way, the question becomes unanswerable, b/c if the means are unknown, it is impossible to even hypothesize about the effects of any changes to the system.
 
How will you determine this behaviour without any output?

I'm not saying that you do.

Here I'm specifically discussing whether we can conclude that operating speed has no effect on the generation and maintenance of consciousness -- a high level behavior -- based on the fact that changes in operating speed do not change the outcomes of low level reactions (or calculations or logical manipulations or what have you).

Of course, if slowing down the operating speed did yield different logical / mathematical / neuronal outputs at a low level, that would put a wrench in it right there.

But when we're asking about a high level behavior -- conscious awareness -- the assumption that low-level input/output is unaltered doesn't by itself give us sufficient information to answer the question.

As I've noted, the most recent research indicates that conscious experience has an "ignition" point and requires a "sustained reverberant state of coherent activity" (to use the researchers' term).

So even though we get the same input/output on a low level if the operating speed is slowed down relative to real time, the question remains whether these low-level outputs, if spaced very far apart in time, will allow such a coherent state to be sustained.

I don't know if we have enough information to answer that. My feeling is that, if we slowed it down to the speed proposed in the Glacial Axon Machine thought experiment, real-time cohesion would degrade to the point that consciousness would not be possible.

I can't prove that, tho. But knowing the low-level input/output doesn't change is pretty much irrelevant to the question.
 