Robot consciousness

The states say what the states represent.
Do they? I realize that it's a silly question, but what does 101110101000101010010.... represent? Depending on the encoding it might mean anything.

The brain is a complex, self-referential structure. It's not that there are chemicals and neurons doing their thing, and then some other thing is looking at those reactions and states and saying "oh, that means I think the tree is pretty".
Clearly

Instead, the different systems in the brain are interacting. One module is creating an image of a tree, another is firing off a signal to create some hormones that cause us to feel happy, another is firing the 'awe' hormones, another is sending signals to the heart to slow it down. Meanwhile, more modules sense those things happening, giving you the awareness that you are happy, feel awe, that you are calming down, etc. And you sit under the tree saying "it's so peaceful here, I love coming to the field and lying under a tree".
This looks to me like a roundabout route to saying there is only one interpretation of the meaning of all the 1's and 0's.

I adore the book Consciousness Explained by Daniel Dennett.
Sitting on my shelf, on the queue.
 
Absolutely. Who has argued otherwise? The cog brain, the silicon brain, even the pencil brain, all are required to have real-time coherence of coordinated, aggregated data. Your bison fart or whatever it was was a strawman, because we are not postulating just any old pencil strokes turning into a brain. Farts will never think. 2+2=4 on a piece of paper will never think. A very complex, self-referential, highly coordinated system reacting to inputs in real time is required. The form that system takes is irrelevant, so long as those conditions are met.

Ok, well, I'd like to come to a meeting of the minds here, because I have a hard time understanding how you can be saying what you seem to be saying.

Cog brain, silicon brain, ok.

I'm not certain cogs would work, but I won't discount them.

For convenience, let's consider a wired brain.

With a wired brain, you can actually move information around as in a human brain. You can set up the modules, aggregate the data, route it around.

So maybe we can get a conscious brain from that.

A pencil, however, can't actually do any of that. Not in reality.

In the physically real world, which is where consciousness arises, a pencil moving across a sheet of paper does only that -- moves across the paper.

So the comparison to swaying daisies or water buffalo farts is apt. They can't generate consciousness, either.

Of course, the movement of the pencil can indeed draw symbols which help a human brain to trace, symbolically, streams of data by performing its own computations. But it's not going on in the pencil, as it is in the wired brain.

In fact, it's not going on in the mind either -- not physically. There are physical analogs to that activity, but it's not really happening objectively.

So that process cannot actually create a new brain of any kind or a new instance of consciousness.

Are you really saying that it can?

I certainly hope you're not.

Because if you are, you're ignoring the rather salient fact that, as far as we know, only actual physical activity (whether electrical or chemical) in the physical world generates consciousness. Symbolic activity in our heads does not.

And in the pen and paper scenario, the only physical activity is a pencil moving around. Everything else is symbolic activity in our heads.


And, as I've said many times, the whole idea behind the OP is not just that the brain is slowed down, but so are the inputs. Slowing down in that context does not change the coordination, it does not change the 'real time' nature of the system.

I know, I get what you're saying.

The question is, do we actually have the kind of data coherence that's needed for a physical brain to do the physical work that's required for a real being to actually maintain consciousness in reality?
 
Take two brains, accelerate them so that they are travelling at nearly the speed of light relative to each other, so that the rate of time for one is 1/trillionth the rate of time for the other. (If you don't like trillionth, make it 1/googolplex, or whatever floats your boat.)

According to some in this thread, that brain will no longer be conscious.

How would the relative rates of time here change anything?
 
Do they? I realize that it's a silly question, but what does 101110101000101010010.... represent? Depending on the encoding it might mean anything.
Right. But the 'tree recognizing' structure in your brain will light up when it sees that, and only that sequence. To it, it means 'roger sees a tree'. It could just happen that that same pattern presented to another module would mean "my left pinkie toe hurts". Sure, chances are huge that some other pattern represents that particular sensation, but in a sense it is all arbitrary.



This looks to me like a round about route to saying there is only one interpretation of the meaning of all the 1's and 0's.
No, just what I said above. To take it back to computers, a two's complement computer and a BCD computer represent -34 with completely different 0/1 combinations. It's all in how the modules interpret the signals presented to them.
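roger's -34 example can be made concrete. A minimal sketch (helper names are my own, and the BCD sign convention here, a single leading sign bit, is just one of several used in real hardware):

```python
def twos_complement(n: int, bits: int = 8) -> str:
    """Two's-complement encoding of n in a fixed bit width."""
    return format(n % (1 << bits), f'0{bits}b')

def signed_bcd(n: int) -> str:
    """Sign-magnitude BCD: one sign bit, then one 4-bit nibble
    per decimal digit (one of several real BCD conventions)."""
    sign = '1' if n < 0 else '0'
    return sign + ''.join(format(int(d), '04b') for d in str(abs(n)))

print(twos_complement(-34))  # 11011110
print(signed_bcd(-34))       # 100110100  (sign 1, then digits 3 and 4)
```

Same number, two entirely different bit patterns; only the interpreting module makes one of them mean -34.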


Sitting on my shelf, on the queue.
 
No, I'm concerned about what happens when the firing of neurons slows to a level where large-scale data coherence can't be sufficiently maintained.


You can learn about what happens to a person with a normal brain when typical input is stopped. See the experiences of those subjects in sensory deprivation experiments. Even when fully conscious (awake), the experience is very much like dream state.
 
Ok, well, I'd like to come to a meeting of the minds here, because I have a hard time understanding how you can be saying what you seem to be saying.

Cog brain, silicon brain, ok.

I'm not certain cogs would work, but I won't discount them.
Well, let's stop here for a moment, because it is absolutely fundamental.

Bear with me. It's vital to understanding this.

A Turing machine is (almost*) nothing more than a paper tape, a motor that can either move that tape one step forward or one step back, and four basic instructions: move tape forward, move tape backwards, write a 1, read the tape at the current position.

That's it. Turing proved that this simple setup can compute anything computable. Furthermore, if something can be reduced to a TM (such as your PC, a parallel processor, a neuron, etc.) it also can compute anything computable. There are several important corollaries to that that I think you are missing.

This means that any combination of computable things is still computable. So, you hook 10 serial processors together into a parallel network, it still reduces to a TM, and is computable. Eleventy-nine trillion neurons hooked together with a sophisticated hormone/chemical infrastructure - computable. Etc.

Furthermore, computable objects can only do computable things. There is no way to cobble up transistors to come up with something uncomputable. Likewise, neurons. So, if neurons are computable (you've agreed they are), then the whole brain is computable.

Therefore, the whole brain can be replaced with one single TM - a paper tape, a motor moving that tape, and a pen that writes 1's on that tape (and the list of instructions, of course).

That's information science 101 - covered in any undergrad course.

Cogs can be made into a computer, hence a TM, hence cogs can compute anything computable.

But let's make an actual TM. Now, we have a machine that writes 1's on a piece of paper that replaces a brain - it can do everything a brain can do, unless you say something in the brain is not computable (Nobel Prize for that one). It's a pen, paper, and some instructions for how to write the data on the paper.

Note I pointed out that parallel machines, of any complexity, so long as they are made of computable elements, reduce to a TM. Consult the literature for that one; the proof is too much for a forum post.
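The tape-head-and-table skeleton roger describes is small enough to sketch in full. A toy of my own construction (not any canonical machine), running a binary-increment program:

```python
from collections import defaultdict

def run_tm(program, tape, state='start', head=0, max_steps=10_000):
    """A minimal Turing machine: a tape, a head, a state, and a
    transition table mapping (state, symbol) -> (write, move, next)."""
    tape = defaultdict(lambda: '0', enumerate(tape))  # blank cells read '0'
    for _ in range(max_steps):
        if state == 'halt':
            break
        write, move, state = program[(state, tape[head])]
        tape[head] = write
        head += 1 if move == 'R' else -1
    cells = sorted(tape)
    return ''.join(tape[i] for i in range(cells[0], cells[-1] + 1))

# Binary increment: start on the rightmost (least significant) bit and
# carry 1s into 0 until a 0 absorbs the carry.
increment = {
    ('start', '1'): ('0', 'L', 'start'),  # 1 + carry -> 0, carry moves left
    ('start', '0'): ('1', 'L', 'halt'),   # 0 + carry -> 1, done
}

print(run_tm(increment, '0111', head=3))  # 0111 -> 1000
```

Nothing here but a table of rules and marks on a tape, which is exactly the point: the same skeleton, with a vastly longer program, is what the reduction argument says a brain collapses to.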

For convenience, let's consider a wired brain.

With a wired brain, you can actually move information around as in a human brain. You can set up the modules, aggregate the data, route it around.
Yup. And you can do exactly the same thing with a TM, which is nothing more than pen and paper, with a few simple rules as to how to move the pen around on the paper.


A pencil, however, can't actually do any of that. Not in reality.
Yes, in reality. It just so happens a human is pushing that pencil around in your thought experiment. In my thought experiment, the pen is being pushed around by a TM.

So the comparison to swaying daisies or water buffalo farts is apt. They can't generate consciousness, either.
Inapt because there is no program controlling the swaying. The TM, pushing a pen on paper, has a program.

Of course, the movement of the pencil can indeed draw symbols which helps a human brain to trace, symbolically, streams of data by performing its own computations. But it's not going on in the pencil, as it is in the wired brain.
Only because you are thinking about it dualistically. You are thinking about the pencil brain, and then positing something else (the human brain) interpreting those pencil pushes. The TM removes that human brain. All there is is the TM. If attached to a little robot body, that robot would run around, play, argue about TMs, learn to play the cello, etc., and neither Piggy nor roger would have any idea what 10010010101010011010101011111101011011.... means on that paper tape.

In fact, it's not going on in the mind either -- not physically. There are physical analogs to that activity, but it's not really happening objectively.
I hope you see it really is happening. There really is a pencil, really is paper tape, really is a program in the TM, really is patterns on that tape, really is patterns on the tape responding to other patterns on the tape, etc. No human brain trying to figure out that 010101010010010101010101010101001010100101010010100... represents a bee. Just a motor, a pencil, a paper tape, and a really long program.

The question is, do we actually have the kind of data coherence that's needed for a physical brain to do the physical work that's required for a real being to actually maintain consciousness in reality?
That's what the OP posits. If we didn't have it, of course we wouldn't have consciousness. If I built a TM with real parts and ran it in this world, trying to process real-time inputs (watch my dog run across the floor), it'll be way, way, way, way too slow. But we are talking about slowing the inputs and outputs to the speed of the TM. That reduces to the case of relativity. You are running at 1/googolplex the speed of some being in a galaxy receding from us at almost the speed of light, yet you are conscious. So long as the inputs and the brain are running at compatible speeds, there is consciousness.
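The computational half of that claim, that slowing the step rate changes nothing about which states occur or in what order, can be sketched with a toy feedback loop (hypothetical code, not anything from the thread; the update rule is arbitrary):

```python
import time

def run_loop(steps, tick_seconds=0.0):
    """A toy 'brain' loop: each state is computed from the previous one.
    tick_seconds stretches wall-clock time between steps but cannot
    affect which states occur, or in what order."""
    state, history = 1, []
    for _ in range(steps):
        state = (state * 31 + 7) % 97   # arbitrary deterministic update
        history.append(state)
        time.sleep(tick_seconds)        # the 'slowed brain'
    return history

# The fast run and the slowed run pass through exactly the same states.
assert run_loop(20) == run_loop(20, tick_seconds=0.001)
```

Whether consciousness also survives the stretching is the philosophical question the thread is arguing; the sketch only shows that the computation does.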


*Google TM for a full definition; there are a few additional details, such as needing a comparison instruction (if the tape read is a 1, move right and write 0, otherwise move right and write 1). The wikipedia article is pretty robust on this point.
 
Right. But the 'tree recognizing' structure in your brain will light up when it sees that, and only that sequence. To it, it means 'roger sees a tree'. It could just happen that that same pattern presented to another module would mean "my left pinkie toe hurts". Sure, chances are huge that some other pattern represents that particular sensation, but in a sense it is all arbitrary.
But in the context of the module, or of the robot brain as a whole, there is only one possible interpretation of the bits that represent the tree? Given enough knowledge there is only one solution to what each of the representations in our robot brain refer to? It isn't possible to conceive of a robot brain, no matter how insanely designed, for which this would not be the case? We're talking about a giant sudoku with only one solution, yes?

No, just what I said above. To take it back to computers, a two's complement computer and a BCD computer represent -34 with completely different 0/1 combinations. It's all in how the modules interpret the signals presented to them.
I think you think I don't think context is important in the interpretation of the bits. Clearly it is. Sorry if I implied otherwise.
 
????!?!??!?!? You're the one claiming going slow stops consciousness.

In the example you gave -- time dilation -- there would be no loss of coherence in the information being processed in the brain of either subject. From the point of view of the subjects, nothing has slowed down at all.

But I don't think Paul's example has anything to do with time dilation.

I believe he's asking about slowing the rate of neural (or circuit) activity.

If we single-step the process, certainly we lose all real-time coherence of macro-scale information. Consciousness would be impossible under those circumstances, because you couldn't have highly processed information feeding into the modules that produce consciousness and being coordinated there in real time.

It's not like an assembly line that gets parts. Impulses don't stick around, they vanish.

If the brain slowed to one impulse a second, then because those impulses are ephemeral, again you'd lose coherence on the macro scale.
 
You can learn about what happens to a person with a normal brain when typical input is stopped. See the experiences of those subjects in sensory deprivation experiments. Even when fully conscious (awake), the experience is very much like dream state.

But their neurons are still firing at speed, and although they're sensory-deprived, the brain is self-feeding; it produces outputs which are its own inputs (e.g. dreaming).
 
roger, thank you for post 267. An excellent post. I'll have to get to it tonight.

I'll say up front that the TM you're describing is indeed qualitatively different from the pencil-pushing human.

The question of whether that sort of apparatus could be a conscious machine is an interesting one.

More later -Piggy
 
In the example you gave -- time dilation -- there would be no loss of coherence in the information being processed in the brain of either subject. From the point of view of the subjects, nothing has slowed down at all.

But I don't think Paul's example has anything to do with time dilation.

I believe he's asking about slowing the rate of neural (or circuit) activity.

If we single-step the process, certainly we lose all real-time coherence of macro-scale information. Consciousness would be impossible under those circumstances, because you couldn't have highly processed information feeding into the modules that produce consciousness and being coordinated there in real time.

It's not like an assembly line that gets parts. Impulses don't stick around, they vanish.

If the brain slowed to one impulse a second, then because those impulses are ephemeral, again you'd lose coherence on the macro scale.
Is that what we've been dancing around this whole time? Of course, if the brain, and brain only, was slowed down, without changes to the chemical processes, naturally consciousness would cease.
 
I adore the book Consciousness Explained by Daniel Dennett. In it he undertakes explaining consciousness as the result of the interactions between different modules in the brain, much like Piggy describes it, btw. It's not intended to be an accurate description, as we don't have the science of how everything is arranged and behaves yet, but merely intended to be representative - this is one way a brain much like ours could become conscious.


I liked that one too, but felt somewhat unfulfilled. Afterwards, it was Steven Pinker that gave me satisfaction. But I must strongly recommend Antonio Damasio's The Feeling of What Happens: Body and Emotion in the Making of Consciousness, and also his Descartes' Error: Emotion, Reason, and the Human Brain.

If you have time for an eccentric view on consciousness, try Sir Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Penrose is a brilliant mathematician who tried to tackle consciousness from that starting point. A bit like a plumber doing his best to explain a computer.
 
But their neurons are still firing at speed, and although they're sensory-deprived, the brain is self-feeding; it produces outputs which are its own inputs (e.g. dreaming).

Well I don't think the speed of any given firing synapse is "slowed", but the overall neural activity may be significantly reduced. IOW, you can slow the overall brain activity but not the action of any single neuron's firing speed within it.

It is the optic nerves that provide the constant and rapid bombardment of input which maintains the fully conscious wakeful state. This stops or is drastically altered when the input is halted (such as in sensory deprivation) or brought to a trickle. Extended sensory deprivation (or solitary confinement with no light) may be the worst possible human torture. Being fully awake is not much different than being asleep. When normal people are asleep, they don't really know that they are asleep. But extended sense deprivation may deprive a person of knowing that they are awake even when they are. What kind of a world is that?
 
Consider a conscious robot with a brain composed of a computer running sophisticated software. Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains.

Would the robot be conscious if we ran the computer at a significantly reduced clock speed? What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?

I can't take credit for these questions; they were posted on another forum. The following paper is relevant to this issue:

http://www.biolbull.org/cgi/content/abstract/215/3/216

~~ Paul

"Conscious"? I think you mean "Consciousness" "Having a conscious" has a double meaning. It can mean having a sense of right and wrong. "Awareness" or "being self aware" is what having a "consciousness" means.

The nature of consciousness, as far as I have read and heard of, is so far a mysterious thing. A machine having a consciousness is one of those things that is regarded as being so far out of probability that it is still the stuff of science fiction.
 
Is that what we've been dancing around this whole time? Of course, if the brain, and brain only, was slowed down, without changes to the chemical processes, naturally consciousness would cease.

That's what I took it to mean.

What would happen if we slowed only the brain, so the pulses (the information) were more spread out?

If time itself slowed down, relative to something else, there'd be no effect.

(I'll come back later w/ a response to the other post.)
 
A machine having a consciousness is one of those things that is regarded as being so far out of probability that it is still the stuff of science fiction.

Not any more. I would bet that most scientists would expect it to happen eventually, if we don't outlaw the processes necessary to get there, or blow ourselves back to the Stone Age.
 
Piggy said:
Enough is known about the brain to conclude that however it makes consciousness, that process bears no resemblance to scribbling with a pencil.
We agree that the mechanism bears no resemblance. I see no reason why both mechanisms cannot produce consciousness, since they are computationally equivalent (modulo any real-time-sensitive processes).

~~ Paul
 
Piggy said:
If you were to create a brain with electrical circuits, you could build an object that (hypothetically) worked like a human brain, perhaps you could build a consciousness module and have a conscious robot.

However, sitting at a table moving a pencil over paper does nothing that could even hypothetically create a working brain of any kind. It doesn't matter whether you're doodling, or writing formulas which, in your mind, represent the workings of a conscious brain.
Sorry, I don't understand why. Perhaps you could try to explain why, rather than simply repeating your assertion.

You cannot be serious.
About which part of what I said?
me said:
But, of course, we all know that I can hand simulate a typical computer program and produce the same answers. Why is a computer program that produces consciousness a special case?

~~ Paul
 
