Robot consciousness

I can't take credit for these questions; they were posted on another forum. The following paper is relevant to this issue:

biolbull.org/cgi/content/abstract/215/3/216

~~ Paul

Just read this abstract. It seems to imply some sort of Cartesian Materialism with its "qualia spaces"; has this been discussed in this thread yet?
 
Why would quantum coherence allow something more powerful than a TM? Any quantum dynamics can be simulated on a classical computer, although it might take an extremely large number of computations.
That is exactly what I was thinking too. As far as I know, QM may be able to do some things more efficiently (possibly), but it is still within the framework of a TM. The other thing QM can add is absolute randomness, which is precisely what we do not think we see in how the brain works. Unless we are making random decisions on the deepest level, which are then modified by other brain functions before they are acted on.

At any rate, a TM equipped with a QM-based random number generator would bridge that gap nicely.
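
Here's a quick sketch of both points in Python (using NumPy; the single-qubit Hadamard example and the "TM plus a random source" framing are mine, just for illustration): the quantum dynamics reduces to deterministic linear algebra, and the only non-deterministic step is sampling the measurement outcome, which can be handed off to any random source.

import numpy as np

# A qubit state is a 2-component complex vector; gates are unitary matrices.
state = np.array([1.0 + 0j, 0.0 + 0j])              # the |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Evolution is deterministic matrix arithmetic -- classically simulable,
# though for n qubits the state vector grows to 2**n entries.
state = hadamard @ state

# Born rule: outcome probabilities are the squared amplitudes.
probs = np.abs(state) ** 2                           # [0.5, 0.5]

# The one non-deterministic step: sample the outcome from a random source.
rng = np.random.default_rng()
outcome = rng.choice([0, 1], p=probs)
print(f"P(0)={probs[0]:.2f}, P(1)={probs[1]:.2f}, measured {outcome}")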
 
Ok. But if a TM system does that, then it is simulating the action of the hammer, not performing it, correct? No real nails get driven.

Yes. But you asked us to stipulate that the robot is conscious, i.e. that real consciousness is happening (just like real addition is happening when I ask you how much is seven and three and you answer ten).


ETA: If the input/output relationships of the brain are "typically described in terms of computation", is that sufficient to conclude that the brain is, in fact, a TM?

As far as we can tell, the brain does nothing other than process information.

So that would be a "yes." Unless magic pixies in the microtubules somehow make it not a TM.
 
Hey, y'all. I haven't read the responses to my posts from last night yet, but I woke up this morning realizing I'd posted a bunch of junk. I shouldn't try to think about this kind of topic when I can't sleep and am on auto-pilot. I end up taking short-cuts and making mistakes.

I want to back up and try to take a more considered approach so I can better understand some of the concepts I'm unfamiliar with. I'll do that this evening.

Sorry about the jabber. I'll make an effort not to ramble like that again.

Later -Piggy
 
That is exactly what I was thinking too. As far as I know, QM may be able to do some things more efficiently (possibly), but it is still within the framework of a TM. The other thing QM can add is absolute randomness, which is precisely what we do not think we see in how the brain works. Unless we are making random decisions on the deepest level, which are then modified by other brain functions before they are acted on.

At any rate, a TM equipped with a QM-based random number generator would bridge that gap nicely.

And if you buy the "many worlds" interpretation (as I do), there's not even any randomness involved. Regardless, quantum coherence is a separate issue - non-coherent quantum systems are random (or not) in just the same way as coherent ones.
 
@drkitten: Of course it's irrelevant for our purposes here, but isn't the brain certainly less powerful than a TM since a TM has an infinite memory?
 
@drkitten: Of course it's irrelevant for our purposes here, but isn't the brain certainly less powerful than a TM since a TM has an infinite memory?

And also an infinite lifespan. No brain can solve a problem that will take several million years to complete (unlike Douglas Adams' Deep Thought).
 
The essential components of a Turing Machine (for our purposes) are:

1) State information storage.
2) A means of recognizing part of that state and using it to modify the state to generate the next state.
3) A means to input and output some of that state.

That's it. The brain's neurons with their malleable interconnections map into that. The interesting part is in the details of next-state generation. Also remember that a TM can represent particular states using a form of fuzzy logic, where every representation is given a level of certainty.

Just how a particular function is implemented can vary widely. The exact same results for a function can be gotten from hard-wired logic gates, a program, neural-net hardware, neural-net program, etc. There's no need for a one-to-one fine-level mapping unless we're reverse-engineering parts of the brain, e.g., Blue Gene.
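
To make that implementation-independence concrete, here's a small Python sketch (the function names and the hand-chosen network weights are mine) of one function -- XOR -- computed three ways: from logic-gate operations, from a pure lookup table, and from a tiny network of threshold units. All three have identical input/output behavior.

def xor_gates(a, b):
    # XOR wired from gate operations: (a OR b) AND NOT (a AND b).
    return (a | b) & ~(a & b) & 1

XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor_table(a, b):
    # XOR as pure stored state: no logic at all, just remembered answers.
    return XOR_TABLE[(a, b)]

def xor_network(a, b):
    # XOR from threshold "neurons": h1 fires on (a OR b),
    # h2 fires on (a AND b), the output fires on (h1 AND NOT h2).
    h1 = int(a + b >= 1)
    h2 = int(a + b >= 2)
    return int(h1 - h2 >= 1)

for a in (0, 1):
    for b in (0, 1):
        assert xor_gates(a, b) == xor_table(a, b) == xor_network(a, b)
print("gates, table, and network agree on every input")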

Well, I'm not asking for a one-to-one fine-level mapping, actually.

But, as you see it, which components of the brain -- be they actual physical structures or neural pathways or even coordinated actions or anything else, what have you -- do you think correspond to each of those components?

Or alternately, to use the list of components from a link roger gave earlier, which components of the brain correspond to the tape, head, table, state register, and (human) computer?

"Information" is not a metaphor here-- it's real. If I ask you to multiply two numbers, you receive information that includes those numbers and the command to multiply them. You can then return the number. I can ask a calculator to do the exact same multiplication and (hopefully!) get the same answer. Sure, I have to speak the calculator's language of buttons and display, but the critical information is the same.

I don't see the purpose of limiting the word "understand" to just humans, any more than we limit the word "memory". If a calculator gives me what I recognize as the right answer when I press its "multiply" key, then I say that the calculator fundamentally understands that keypress to mean "multiply". Any understanding that we may have beyond that doesn't take that away.
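
Here's a tiny Python sketch of that point (the string "button language" is invented for illustration): the critical information -- two operands and the command to multiply -- produces the same answer whether it arrives as a direct function call or has to be parsed out of a different surface language first.

def multiply(a, b):
    # Asking in the caller's own "language": a direct function call.
    return a * b

def calculator(keypresses):
    # Asking in a toy calculator's button language, e.g. "6 * 7".
    left, op, right = keypresses.split()
    assert op == "*", "this toy calculator only understands multiply"
    return int(left) * int(right)

# Same information in, same answer out, despite different encodings.
assert multiply(6, 7) == calculator("6 * 7") == 42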

"Information" is always a metaphor, or at least an abstraction.

Yes, you can ask me to multiply two numbers, and I can give you an answer, and you can ask a computer to do that and get an answer, but that's extremely high-level and symbolic. Only a human observer would be aware that any such thing had happened, even if a non-human observer could witness every physical action involved in both cases.

(Compare that to, say, a tree limb falling onto a rock.)

If we want to compare the brain and the computer, we have to compare what happens in the interim and see if we're getting these answers by similar means. (That is, we do if we want to establish whether these structures are entirely analogous or, rather, functionally analogous for a limited range of tasks.)

In the brain, sound waves are picked up and pattern-matched, and (here I'm afraid we have to deal with a black box) this pattern matching process leads to a physical response.

Is the computer actually going through a similar process if I enter the problem into an application and the patterns on a screen change as a result?

Here's why it matters:

If we say that the brain is not more powerful than a computer (TM), then we are saying (as I understand it) that this bodily organ cannot do anything a computer cannot do.

Therefore it is imperative that we do not assume from the beginning that brains and computers do the same kind of thing, and only that kind of thing. So we must be very careful that we're being entirely accurate and precise if we assert that they're both doing "information processing".

There are a few ways I can think of to ensure that the analogy holds.

First, we can show that brains and computers are, in their entirety, TMs. And we can do that by finding counterparts to the necessary components of TMs in the brain, with nothing left over.

Or, we could simply make a computer do everything the brain does. (This would be like comparing my full-size car to my functional model car.) And so far they've done a lot, but not all, so we're not there yet.

Or, we could show that what the brain does physically is entirely analogous to what a computer does. For instance, imagine we replace axons with wires or some such, and replace terminal buttons perhaps with nodes that release a tiny electrical charge when stimulated by the axon-wire, and replace dendrites with plates that send a charge down the next axon-wire when there's a sufficient charge in the synapse built up by the terminal nodes. Some setup like that.

So now we have a mechanical replica brain. Would that machine be doing what a computer does?
 
Yes. But you asked us to stipulate that the robot is conscious, i.e. that real consciousness is happening (just like real addition is happening when I ask you how much is seven and three and you answer ten).

Yes, we stipulate a conscious robot. But we can only stipulate that this robot can have a brain like a contemporary computer if we can establish with certainty that consciousness is a function which such computers can perform.

Since we don't know exactly how that function is performed, in order to make that assertion, we have to establish that the two objects are completely functionally comparable.

Showing that they can both perform a certain set of tasks (perhaps by different means) does not assure us of this.

As far as we can tell, the brain does nothing other than process information.

But here we still seem to be on shaky ground (for now) because "process information" is a very loose metaphor. We can't rely on it to reach the conclusion that all tasks performed by the one are performable by the other.

To reach that conclusion, we must establish that they're the same kind of information processors. Which they may well be.
 
Yes, we stipulate a conscious robot. But we can only stipulate that this robot can have a brain like a contemporary computer if we can establish with certainty that consciousness is a function which such computers can perform.

Exactly wrong. If we could establish with certainty that such robots existed, we would have no need to stipulate it.

Since we don't know exactly how that function is performed, in order to make that assertion, we have to establish that the two objects are completely functionally comparable.

Again, exactly wrong. Since we've proven that all the physical objects that could possibly perform the task are completely functionally comparable, if the task can be done at all, it can be done by such an object.

There are three possible cases:

Case 1 : robots cannot be conscious. You have asked us to stipulate that this is false.
Case 2 : robots can be conscious by virtue of the program in their TM.
Case 3 : robots can be conscious by virtue of the program running in their non-computational brain. We have proven beyond practical doubt that such "non-computational brains" violate the laws of physics; therefore case 3 is also rejected.

Therefore, case 2 must be true.

To reach that conclusion, we must establish that they're the same kind of information processors.

We have established that there is only one "kind" of information processor. Therefore, any two information processors are "the same kind."
 
Well, I'm not asking for a one-to-one fine-level mapping, actually.

But, as you see it, which components of the brain -- be they actual physical structures or neural pathways or even coordinated actions or anything else, what have you -- do you think correspond to each of those components?

Or alternately, to use the list of components from a link roger gave earlier, which components of the brain correspond to the tape, head, table, state register, and (human) computer?

Turing machines are not required to have tapes, heads, tables, and/or state registers. Conway's Game of Life is provably Turing-equivalent and has none of those.
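
For the curious, the Game of Life's entire "machinery" is the one local update rule below; a minimal Python sketch (the glider is the standard information-carrying pattern; the actual Turing-equivalence construction builds logic gates out of such patterns and isn't shown here):

from collections import Counter

def life_step(alive):
    # One generation on an unbounded grid; 'alive' is a set of (x, y) cells.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick with exactly 3 live neighbors,
    # or with 2 live neighbors if it is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# The classic glider: after 4 steps it reappears shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))

No tape, head, table, or register anywhere -- just that rule.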
 
Well, I'm not asking for a one-to-one fine-level mapping, actually.

But, as you see it, which components of the brain -- be they actual physical structures or neural pathways or even coordinated actions or anything else, what have you -- do you think correspond to each of those components?

Or alternately, to use the list of components from a link roger gave earlier, which components of the brain correspond to the tape, head, table, state register, and (human) computer?
I should properly call what I've described a Finite State Machine, which Turing Machines and all digital computers are "composed" of (in quotes because they can be seen as one FSM per storage bit, a single FSM for all, or any combination in between).

You're unlikely to find a TM at the heart of any computer. TMs are useful in computation theory because they can be proven to be equivalent in function to any other form, but they themselves don't make practical computers, nor, most likely, brains.

FSMs, on the other hand, are mappable to networks of neurons, the elements of which you're likely familiar with: the state storage maps onto the modifiable synaptic strengths and action potential thresholds, and the logic maps onto the neurons' summing of excitatory and inhibitory inputs. The clock rate of the equivalent FSM can be chosen fast enough that any further improvement is buried in the noise, but it need not be infinite.
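
Here's a small Python sketch of that mapping (the weights, threshold, and discrete clock ticks are illustrative stand-ins, not a biophysical model): the stored state is the synaptic weights and firing threshold, and the logic is the weighted sum of inputs compared against that threshold, once per clock tick.

from dataclasses import dataclass

@dataclass
class ThresholdNeuron:
    weights: list        # synaptic strengths: positive excitatory, negative inhibitory
    threshold: float     # action potential threshold
    fired: bool = False  # current output state

    def tick(self, inputs):
        # One FSM clock step: sum the weighted inputs, fire if over threshold.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        self.fired = total >= self.threshold
        return self.fired

# Two excitatory inputs and one inhibitory input: the unit fires only
# when both excitatory lines are active and the inhibitory line is quiet.
neuron = ThresholdNeuron(weights=[1.0, 1.0, -2.0], threshold=2.0)
print(neuron.tick([1, 1, 0]))  # True
print(neuron.tick([1, 1, 1]))  # False: inhibition suppresses firing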

"Information" is always a metaphor, or at least an abstraction.

Yes, you can ask me to multiply two numbers, and I can give you an answer, and you can ask a computer to do that and get an answer, but that's extremely high-level and symbolic. Only a human observer would be aware that any such thing had happened, even if a non-human observer could witness every physical action involved in both cases.
That's only true if you don't allow non-humans to have awareness by definition. Until you can define awareness in non-prejudicial terms, that's unsupported.
(Compare that to, say, a tree limb falling onto a rock.)

If we want to compare the brain and the computer, we have to compare what happens in the interim and see if we're getting these answers by similar means. (That is, we do if we want to establish whether these structures are entirely analogous or, rather, functionally analogous for a limited range of tasks.)

In the brain, sound waves are picked up and pattern-matched, and (here I'm afraid we have to deal with a black box) this pattern matching process leads to a physical response.

Is the computer actually going through a similar process if I enter the problem into an application and the patterns on a screen change as a result?

Here's why it matters:

If we say that the brain is not more powerful than a computer (TM), then we are saying (as I understand it) that this bodily organ cannot do anything a computer cannot do.

Therefore it is imperative that we do not assume from the beginning that brains and computers do the same kind of thing, and only that kind of thing. So we must be very careful that we're being entirely accurate and precise if we assert that they're both doing "information processing".

There are a few ways I can think of to ensure that the analogy holds.

First, we can show that brains and computers are, in their entirety, TMs. And we can do that by finding counterparts to the necessary components of TMs in the brain, with nothing left over.

Or, we could simply make a computer do everything the brain does. (This would be like comparing my full-size car to my functional model car.) And so far they've done a lot, but not all, so we're not there yet.

Or, we could show that what the brain does physically is entirely analogous to what a computer does. For instance, imagine we replace axons with wires or some such, and replace terminal buttons perhaps with nodes that release a tiny electrical charge when stimulated by the axon-wire, and replace dendrites with plates that send a charge down the next axon-wire when there's a sufficient charge in the synapse built up by the terminal nodes. Some setup like that.

So now we have a mechanical replica brain. Would that machine be doing what a computer does?
Yes, that machine would be equivalent to a computer running a program that was functionally equivalent to the hardware, or implemented as the hardware itself, or any combination in between. I don't know what could convince you of that short of a course in digital logic and programming, where such equivalence is continually made use of.
 
Exactly wrong. If we could establish with certainty that such robots existed, we would have no need to stipulate it.

You're agreeing with me here.

We simply stipulate a conscious robot.

We do not stipulate what sort of brain this robot has.
 
Again, exactly wrong. Since we've proven that all the physical objects that could possibly perform the task are completely functionally comparable, if the task can be done at all, it can be done by such an object.

There are three possible cases:

Case 1 : robots cannot be conscious. You have asked us to stipulate that this is false.
Case 2 : robots can be conscious by virtue of the program in their TM.
Case 3 : robots can be conscious by virtue of the program running in their non-computational brain. We have proven beyond practical doubt that such "non-computational brains" violate the laws of physics; therefore case 3 is also rejected.

Therefore, case 2 must be true.

Hold on there.

Obviously, case 1 is out. That's trivial.

Case 2 we do not need to consider, unless you can demonstrate that consciousness can/must be produced by a "program in [a] TM brain". Which has never been done, to my knowledge, and no evidence has been presented here that it has been done.

As to case 3, I'll need some further explanation to evaluate it.

When you say that the human brain is "computational", do you mean that it is "computational" in the same way that any physical system is computational? Or something else?
 
We have established that there is only one "kind" of information processor. Therefore, any two information processors are "the same kind."

Pardon me, but the hell we have.

Any physical system that changes can be classified as an information processor if you like.

So it seems that when you say "information processor", you are referring specifically to TMs.

In that case, we need to see some convincing argument that the human brain is -- and is only -- a TM.
 
Turing machines are not required to have tapes, heads, tables, and/or state registers. Conway's Game of Life is provably Turing-equivalent and has none of those.

Well, that's all well and good.

Doesn't address the question, tho.

Now mind you, I'm not saying that the brain is not a TM. It may well be. Would be interesting to know.

But if you say it is, then we need to outline the necessary components of such a device and describe which components of such a device correspond to analogous components of the brain, with nothing left over.

I'm not simply going to accept such a claim on faith, especially given the enormous ramifications.
 
That's only true if you don't allow non-humans to have awareness by definition. Until you can define awareness in non-prejudicial terms, that's unsupported.

Oh, I do allow non-humans to have awareness.

My point was only that the description of the physical action was abstract and metaphorical.

If a dog is aware, then when a tree limb falls on a rock, if the dog observes the event, it perceives that this action has occurred, just as a human would.

But if a dog observes a person asking another person or a computer to perform a multiplication and subsequently receiving an answer, it cannot perceive this action in the way I just described, because that action is entirely symbolic, not physical.

So we need to be very careful here to consistently distinguish between physical reality, on the one hand, and symbolic representation on the other.
 
Yes, that machine would be equivalent to a computer running a program that was functionally equivalent to the hardware, or implemented as the hardware itself, or any combination in between. I don't know what could convince you of that short of a course in digital logic and programming, where such equivalence is continually made use of.

So you're saying it's functionally equivalent?

How can you assert that such a machine (our replica brain) is entirely functionally equivalent in all aspects to a computer of the type we're accustomed to?

(Again, honest question, not an attempt at a "gotcha", which would be foolish, since I don't pretend to know the answer.)
 
FSMs, on the other hand, are mappable to networks of neurons, the elements of which you're likely familiar with: the state storage maps onto the modifiable synaptic strengths and action potential thresholds, and the logic maps onto the neurons' summing of excitatory and inhibitory inputs. The clock rate of the equivalent FSM can be chosen fast enough that any further improvement is buried in the noise, but it need not be infinite.

Before I ask you any more questions about this, I'll need some time to become a bit more familiar with finite state machines, so I don't waste too much of your time.

The first bit seems fairly straightforward. I can see how state storage is analogous to action potential thresholds. Below this point, 0, above this point, 1.

If I take you right, the logic is essentially analogous to the configuration of the entire neural network?
 
