• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Robot consciousness

So a disembodied consciousness would not be possible? Keep a brain alive in a jar and it wouldn't be conscious? Clearly it would be a disordered mess if it developed without such interactions, but knock someone over the head, steal their brain, put it in a jar...


If this could be done, I would expect that any kind of "consciousness" available to the jar brain would be like the "consciousness" available to a normal human during dream state.


roger said:
I started this whole sleep thing, you all took it on a tangent I didn't intend. I was initially going to say brain dead, but current medicine hasn't been able to bring somebody back from brain death, and so I was concerned somebody would say "but once your brain dead you are dead". The point was simple, and rhetorical - Piggy's assertions nonwithstanding, there is absolutely nothing we know that states that if we stopped a brain for awhile, then started it back up, that consciousness would not return. Sleep was an easy thing to reach for, though of course in sleep's case many of the lower level functions continue.

We have learned much about the nature of consciousness by studying people in coma, persistent vegetative state, and locked-in syndrome.
 
There comes a point where you have to call foolishness foolishness.

And no, neuroscience does not proceed with the assumption that the physical activity of the brain is akin to the physical activity of moving a pencil across a sheet.
Well, inform the scientists at the Krasnow Institute (and many others), and inform the journals they publish in. As it happens, I know some of these people, have discussed neuroscience and their research with them, and yes, they do regard what neurons do as computation.

But we're not talking about computation. We're talking about a machine that can generate the phenomenon of conscious awareness.

Maybe it can be done with cogs, I dunno.

Doesn't matter, really.
I thought you were arguing that consciousness arises from the patterns and reactions of neurons and modules, and not from anything extra or special. That's computation.

So please explain to me how you extrapolate from computation and the behavior of individual neurons in petri dishes to the conclusion that the brain can produce conscious awareness at the rate of one impulse per second.
I see that I can't do that in a way you'll agree with, since you apparently don't think that what neurons do is 'computation'. I'll point out that neuroscientists at Krasnow and elsewhere are using computational models to produce extremely precise models of neural behavior. It's not perfect, but there is nothing in the imperfection that suggests the components are non-computational. Seriously, and I don't mean this as a slight, I don't think you understand computational science, because what you have said about it is inconsistent.

However, assuming you (or somebody) accepts that neurons and the modules they create are computational (and that is just bog standard in the neuroscience world), the argument is quite straightforward:

First of all, the definition of computation does not include rate of timing. Computation is the same whether you are running at 1 cycle per eon or 10^16 cycles per second. It's just faster or slower. Where you seem to be getting stuck is that you keep bringing up the timing of signals. Yes, of course signals need to be coordinated, but that is exactly what we are postulating - all signals are slowed down by the same rate. In any case, nowhere in the theory of computation does rate of timing come up. I point you to Turing's canonical paper on this point.
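To make that concrete, here's a toy sketch (all names are hypothetical, nothing from any real neuroscience code): the same step function run with different per-step delays produces the same final state. Only wall-clock time changes.

```python
import time

def run_machine(program, state, delay=0.0):
    """Run a simple step-function 'machine' to completion.

    The per-step delay changes only wall-clock time, never the result:
    rate of execution is not part of the definition of the computation.
    """
    while not state.get("halted"):
        state = program(state)
        time.sleep(delay)  # 0 s or 10 minutes per step -- same final state
    return state

# A trivial program: count down to zero, then halt.
def countdown(state):
    if state["n"] == 0:
        return {**state, "halted": True}
    return {**state, "n": state["n"] - 1}

fast = run_machine(countdown, {"n": 5, "halted": False}, delay=0.0)
slow = run_machine(countdown, {"n": 5, "halted": False}, delay=0.01)
assert fast == slow  # identical computation at different speeds
```

The delay is a free parameter precisely because it appears nowhere in the definition of what is being computed.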

Second, on the neuron front. We have identified nothing in a neuron's behavior that is not computational. The fact that we can simulate it proves it is computational. This is such a basic point that I think you must have some weird definition of 'computation' that is not actually used in information science. To be clear, by the definition the rest of us are using, a lever is computational. A set of equations is computational. An algorithm on a computer is also computational. "Computational" has nothing to do with silicon chips or computers, except that in practice computers sure do computations quickly. But neurons do computations too.
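For instance, the standard leaky integrate-and-fire model of a neuron is just arithmetic on state. This is a minimal sketch with textbook-style parameters (the values here are illustrative, not from any particular paper):

```python
def lif_step(v, input_current, dt=1.0, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-65.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    Everything here is arithmetic on state -- i.e., computation.
    Returns (new_voltage, spiked).
    """
    dv = (-(v - v_rest) + input_current) * (dt / tau)
    v = v + dv
    if v >= v_thresh:
        return v_reset, True   # spike, then reset the membrane voltage
    return v, False

# Drive the model neuron with a constant current; it fires repeatedly.
v, spikes = -65.0, 0
for _ in range(200):
    v, spiked = lif_step(v, input_current=20.0)
    spikes += int(spiked)
```

Nothing in the model cares how long each `dt` takes in the real world; the step is the same whether it is evaluated in a microsecond or a minute.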

Anyway, a single neuron in a petri dish responds to inputs as they come. If you send chemical inputs to it as fast as in the brain, it responds just as fast as if it were in the brain. If you take 10 minutes between signals, well, once every ten minutes it'll fire. There's essentially no time element in how it responds. So, if you were to take a brain, put each neuron in a separate petri dish, and set up some kind of network so all the chemical paths were preserved, it'd still work. For the moment assume you make all the signals run just as fast as in the brain, even though they are farther apart (say by using electrical interfaces to get the speed up to the speed of light), or imagine they are tiny little petri dishes so the brain is still the same size. Doesn't matter.

So, if you slow things down by 1%, it's still going to work, just slower. And when I say slow things down, I mean everything. The inputs, the connections, how fast the neuron reacts to signals - everything. Obviously if you slowed down only some things the timings would be all messed up and the brain would stop working.
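The key property of a uniform slowdown can be shown in a few lines (a hypothetical sketch, not a brain simulator): scaling every event time by the same factor stretches the clock but leaves the order of events, and hence the computation, intact.

```python
def firing_order(events, slowdown=1.0):
    """Scale every event time by the same factor; return firing order.

    events: list of (time_seconds, neuron_id). A uniform slowdown
    rescales the clock but cannot reorder the events.
    """
    scaled = [(t * slowdown, nid) for t, nid in events]
    return [nid for t, nid in sorted(scaled)]

events = [(0.001, "A"), (0.003, "C"), (0.002, "B")]
# Real time vs. slowed down by a factor of 60: same order, A then B then C.
assert firing_order(events, slowdown=1.0) == firing_order(events, slowdown=60.0)
```

Slowing down only *some* of the times, by contrast, could reorder events, which is exactly the "messed-up timings" failure described above.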

And there is nothing in the world that says if you only let the signals propagate once a minute that anything would be different. It'd still work, just at a much slower pace. And again, I'm not saying consciousness would be there while the neurons are not firing, just that the sum total would still be consciousness.

You keep saying "foolishness" but give no reason why. The time scale we exist on is arbitrary, based on the speeds of chemical reactions in the brain. You just happen to consider that the "right" speed, for some reason.

Explain how that hardware can produce consciousness.
The same way that neurons produce consciousness.

There is no point in continuing this; you are maintaining a dualist position while insisting you are not a dualist. You're not just a dualist about the brain, but about computation, where somehow silicon is privileged in regard to computation.

Go ahead, call me insane again, instead of tackling the arguments. :rolleyes:
 
It has nothing to do with dualism.

When you look at the brain and consider what is known about consciousness, then you look at the physical act of moving a pencil on paper, it's clear that the latter is not sufficient to do what the former is doing when it creates consciousness.

It's that simple.
Argument from personal incredulity.

I suggest you write this up and submit it to the Journal of Neuroscience. It'll be groundbreaking, probably worthy of a Nobel (I'm serious, if only you could back it up with facts instead of assertions).
 
Regarding one impulse per second:

At that rate of input, none of the things that are needed for consciousness can be happening. From what we can tell, in the brain, simultaneous associations of many types of information have to be made for this effect to occur. That requires a pretty heavy real-time data stream -- which is why the brain uses so much of the body's resources to maintain it.
Okay, clearly you haven't studied information science. No insult - there's a million things I haven't studied. The only problem is, you think you understand it. You don't.

There is nothing a parallel process can do that a serial process can't do. They are computationally equivalent.
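The standard trick that makes this true in practice is double-buffering: every "parallel" cell reads the old state and writes a new one, so one serial loop reproduces the synchronous parallel result exactly. A minimal sketch (hypothetical names, illustrative rule):

```python
def parallel_step(state, update):
    """One synchronous 'parallel' update, computed serially.

    Every cell reads the *old* state and writes into a fresh list, so
    the serial loop gives exactly the result of a simultaneous update.
    """
    return [update(i, state) for i in range(len(state))]

# Illustrative rule: each cell becomes itself plus its left neighbour.
def rule(i, s):
    return s[i] + s[i - 1] if i > 0 else s[i]

state = [1, 1, 1, 1]
state = parallel_step(state, rule)  # -> [1, 2, 2, 2]
```

Because every update reads only the pre-step snapshot, the order in which the serial loop visits the cells is irrelevant - which is the sense in which serial and parallel are computationally equivalent.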

You are making elementary mistakes about the field, and then hurling insults. This reads like those physics threads where that guy lectures people like Sol about relativity without knowing basic algebra. That is not good company to be in.
 
Well, inform the scientists at the Krasnow Institute (and many others), and inform the journals they publish in. As it happens, I know some of these people, have discussed neuroscience and their research with them, and yes, they do regard what neurons do as computation.

That's not the same thing. The conflation is yours.

What I'm saying is that the physical activity of the brain is not akin to the physical activity of a pencil moving across paper.

This pencil brain and pencil thoughts is nonsense.

You might as well say that daisies swaying in the breeze make a daisy brain with daisy thoughts, or a water buffalo farting in a lake makes a fart brain with fart thoughts.

The computation is going on in your head.
 
I thought you were arguing that consciousness arises from the patterns and reactions of neurons and modules, and not from anything extra or special. That's computation.

Consciousness arises from the physical activity of the brain.
 
I see that I can't do that in a way you'll agree with, since you apparently don't think that what neurons do is 'computation'. I'll point out that neuroscientists at Krasnow and elsewhere are using computational models to produce extremely precise models of neural behavior. It's not perfect, but there is nothing in the imperfection that suggests the components are non-computational. Seriously, and I don't mean this as a slight, I don't think you understand computational science, because what you have said about it is inconsistent.

However, assuming you (or somebody) accepts that neurons and the modules they create are computational (and that is just bog standard in the neuroscience world), the argument is quite straightforward:

First of all, the definition of computation does not include rate of timing. Computation is the same whether you are running at 1 cycle per eon or 10^16 cycles per second. It's just faster or slower. Where you seem to be getting stuck is that you keep bringing up the timing of signals. Yes, of course signals need to be coordinated, but that is exactly what we are postulating - all signals are slowed down by the same rate. In any case, nowhere in the theory of computation does rate of timing come up. I point you to Turing's canonical paper on this point.

Second, on the neuron front. We have identified nothing in a neuron's behavior that is not computational. The fact that we can simulate it proves it is computational. This is such a basic point that I think you must have some weird definition of 'computation' that is not actually used in information science. To be clear, by the definition the rest of us are using, a lever is computational. A set of equations is computational. An algorithm on a computer is also computational. "Computational" has nothing to do with silicon chips or computers, except that in practice computers sure do computations quickly. But neurons do computations too.

Anyway, a single neuron in a petri dish responds to inputs as they come. If you send chemical inputs to it as fast as in the brain, it responds just as fast as if it were in the brain. If you take 10 minutes between signals, well, once every ten minutes it'll fire. There's essentially no time element in how it responds. So, if you were to take a brain, put each neuron in a separate petri dish, and set up some kind of network so all the chemical paths were preserved, it'd still work. For the moment assume you make all the signals run just as fast as in the brain, even though they are farther apart (say by using electrical interfaces to get the speed up to the speed of light), or imagine they are tiny little petri dishes so the brain is still the same size. Doesn't matter.

So, if you slow things down by 1%, it's still going to work, just slower. And when I say slow things down, I mean everything. The inputs, the connections, how fast the neuron reacts to signals - everything. Obviously if you slowed down only some things the timings would be all messed up and the brain would stop working.

And there is nothing in the world that says if you only let the signals propagate once a minute that anything would be different. It'd still work, just at a much slower pace. And again, I'm not saying consciousness would be there while the neurons are not firing, just that the sum total would still be consciousness.

You keep saying "foolishness" but give no reason why. The time scale we exist on is arbitrary, based on the speeds of chemical reactions in the brain. You just happen to consider that the "right" speed, for some reason.

What is strikingly omitted from all that is an explanation of how consciousness is generated.

As I've pointed out, consciousness does not occur on the neuronal level. Neuronal activity is invisible to consciousness.

Consciousness is created at a higher level of organization, through the coordination of highly processed and aggregated information.

I have no problem with what you're saying about neurons and computation.

But you've failed to address the question.
 
This is pointless. You are arguing against strawmen.

Look, you are one of my favorite posters here. But you don't understand computation (and I'm talking computation, not brains or neurons or anything like that). Read Turing, who pretty much invented the field, and then some of the later stuff. You are making fundamental mistakes, and ascribing arguments to us that we don't hold. I'll happily engage with you on any other point, but there is no point in continuing this discussion with you.
 
Consciousness is created at a higher level of organization, through the coordination of highly processed and aggregated information.
I absolutely agree.

That's still computational. That's still hardware independent.
 
Before we all go our separate ways I have what I think is a sensible question. Who says what the states of the paper or robot brain represent? I mean, at some point they turn from ones and zeros into a representation of a bird, or a tree. Are we saying that there is one unique solution to what a large array of moving ones and zeros represents?
 
The same way that neurons produce consciousness.

There is no point in continuing this; you are maintaining a dualist position while insisting you are not a dualist. You're not just a dualist about the brain, but about computation, where somehow silicon is privileged in regard to computation.

Go ahead, call me insane again, instead of tackling the arguments. :rolleyes:

There is absolutely nothing dualist in what I'm saying.

The problem is that there are key characteristics of a conscious brain which you are not addressing when you discuss computation.

Let's take Marvin again as an example.

Start with pre-stroke Marvin.

The parts of his brain which handle conscious awareness of emotion receive fairly large streams of data, already highly processed, regarding the activity of other parts of the brain as well as other parts of his body. That data is then coordinated and re-processed, and new impulses are fed back into the system.

One of the results of that processing is that Marvin "feels" happy or sad or angry or afraid etc.

Post-stroke Marvin:

With a key neural channel destroyed, Marvin acts out his emotions, but downstream processing in the emotional centers is hobbled, so Marvin isn't consciously aware of his emotions. He sees something surprising and laughs, but doesn't feel the emotions we feel when we laugh.

Why bring this up?

It's just one example to show that, in reality, in the one working object we know of that actually makes consciousness, we're dealing with the coordination of large amounts of data.

This is also shown when we look at errors in consciousness, such as illusions.

The modules that "do" consciousness receive highly processed and associated data.

Consciousness does not arise as a result of mere computation, although it depends on it.

So...

When we consider the question of how slow the inputs can be, we have to consider the coherence of this processed information.

At one impulse a second, you don't have the kind of coherent, associated information that we know is involved in conscious awareness.

At what point does the quality of the information degrade so far that we can't have consciousness?

Who knows?

I doubt there's a single point.

But single-stepping the process would certainly kill it, because then large-scale coherence would necessarily be lost.

No one can say how slow the system could go before the brain starts doing a HAL.

But I don't see how you can argue that consciousness can be maintained without macro-scale data coherence.
 
Before we all go our separate ways I have what I think is a sensible question. Who says what the states of the paper or robot brain represent? I mean, at some point they turn from ones and zeros into a representation of a bird, or a tree. Are we saying that there is one unique solution to what a large array of moving ones and zeros represents?

If we are not talking about a "paper universe" or "digital universe", then at some point the ones and zeros will translate into some sort of output to the "real world", and then why would the internal state matter?

If we are talking about a "paper universe" or "digital universe", then who says what the state of our universe represents?
 
I absolutely agree.

That's still computational. That's still hardware independent.

It is, as long as you have hardware that can maintain that organization and transmit that highly organized and aggregated information in real time.
 
This is pointless. You are arguing against strawmen.

Look, you are one of my favorite posters here. But you don't understand computation (and I'm talking computation, not brains or neurons or anything like that). Read Turing, who pretty much invented the field, and then some of the later stuff. You are making fundamental mistakes, and ascribing arguments to us that we don't hold. I'll happily engage with you on any other point, but there is no point in continuing this discussion with you.

Oh no, I'm not arguing a strawman.

And I'm not disagreeing with you regarding neurons and computation. I'm absolutely fine with that.

However, if we're going to talk about consciousness specifically, then we need to consider higher levels of organization.

Consciousness depends on computation, of course.

But we can't stop there.

We have to consider the coordination of large chunks of aggregated data.

If we have a robot brain that is conscious, we must assume it produces consciousness the same way the human brain does, but using a different sort of computational circuitry.

(Otherwise, a different means of producing consciousness will have to be explained, and no one's offering that.)

So when we consider how slow we can go, it's not enough to look at neurons and computation -- that is my point.

We are obliged to consider the real-time coherence of coordinated, aggregated data.
 
Before we all go our separate ways I have what I think is a sensible question. Who says what the states of the paper or robot brain represent? I mean, at some point they turn from ones and zeros into a representation of a bird, or a tree. Are we saying that there is one unique solution to what a large array of moving ones and zeros represents?
The states say what the states represent.

The brain is a complex, self-referential structure. It's not that there are chemicals and neurons doing their thing, and then some other thing is looking at those reactions and states and saying "oh, that means I think the tree is pretty". Instead, the different systems in the brain are interacting. One module is creating an image of a tree, another is firing off a signal to create some hormones that cause us to feel happy, another is firing the 'awe' hormones, another is sending signals to the heart to slow it down. Meanwhile, more modules sense those things happening, giving you the awareness that you are happy, that you feel awe, that you are calming down, etc. And you sit under the tree saying "it's so peaceful here, I love coming to the field and lying under a tree".
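The shape of that picture - modules reacting to each other, with later modules taking the *state* of earlier modules as their input - can be sketched in a few lines (entirely hypothetical module names, purely illustrative):

```python
# Hypothetical sketch of interacting modules: no extra 'observer'
# exists; awareness is just another module reading the others' states.

def vision(scene):
    return {"image": scene}

def emotion(percepts):
    return {"mood": "happy" if percepts["image"] == "tree" else "neutral"}

def introspection(percepts, feelings):
    # A module whose input is the state of the other modules --
    # the self-referential part.
    return f"I see a {percepts['image']} and I feel {feelings['mood']}"

percepts = vision("tree")
feelings = emotion(percepts)
report = introspection(percepts, feelings)
# report == "I see a tree and I feel happy"
```

The point of the sketch is only structural: nothing outside the modules assigns meaning to their states; the meaning is in how the modules use each other's outputs.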

I adore the book Consciousness Explained by Daniel Dennett. In it he undertakes to explain consciousness as the result of the interactions between different modules in the brain - much like Piggy describes it, by the way. It's not intended to be an accurate description, since we don't yet have the science of how everything is arranged and behaves, but merely to be representative: this is one way a brain much like ours could become conscious.

And naturally, there is no special 'brain stuff' in this description. All that is required is a complex, self-referential system with certain features. It could even be pencil and paper, so long as it follows the same rules as the brain! (Piggy's head explodes :))
 
So when we consider how slow we can go, it's not enough to look at neurons and computation -- that is my point.

We are obliged to consider the real-time coherence of coordinated, aggregated data.

What does that mean? Are you concerned about response time?

A human who is slowed down by a factor of a billion can't communicate with me, but can communicate with another who is similarly slowed down.
 
So when we consider how slow we can go, it's not enough to look at neurons and computation -- that is my point.

We are obliged to consider the real-time coherence of coordinated, aggregated data.
Absolutely. Who has argued otherwise? The cog brain, the silicon brain, even the pencil brain - all are required to have real-time coherence of coordinated, aggregated data. Your bison fart or whatever it was was a strawman, because we are not postulating just any old pencil strokes turning into a brain. Farts will never think. 2+2=4 on a piece of paper will never think. A very complex, self-referential, highly coordinated system reacting to inputs in real time is required. The form that system takes is irrelevant, so long as those conditions are met.

And, as I've said many times, the whole idea behind the OP is that not just that the brain is slowed down, but so are the inputs. Slowing down in that context does not change the coordination, it does not change the 'real time' nature of the system.

I'm at a loss as to make it any clearer.
Roger: the system will be exactly the same, just slower
piggy: but your system won't be coordinated
roger: yes it will, that's my postulate
piggy: but you aren't taking into account that it has to be coordinated
roger: it will be! Really! Inputs are slowed down to match the speed of the neurons, and the organization of the neurons and higher-level modules will be exactly the same. That's what's being postulated.
piggy: how foolish to think lack of coordination and no structure will lead to consciousness

Where can I possibly go from here? It's like I keep saying "I'm an atheist" and you respond "since you foolishly believe in God..."
 
Take two brains and accelerate them so that they are travelling at nearly the speed of light relative to each other, so that the rate of time for one is 1/trillionth the rate of time for the other. (If you don't like a trillionth, make it 1/googolplex, or whatever floats your boat.)

According to some in this thread, that brain will no longer be conscious.
 
What does that mean? Are you concerned about response time?

A human who is slowed down by a factor of a billion can't communicate with me, but can communicate with another who is similarly slowed down.

No, I'm concerned about what happens when the firing of neurons slows to a level where large-scale data coherence can't be sufficiently maintained.
 
