
Robot consciousness

What is meant by "more powerful"? Does that simply mean that it can do things that a TM can't?

There are input-output mappings that a TM cannot compute. A good example is a mapping that will read in a Java program and output "yes" if the program will eventually generate output and "no" if it will not. It is provably impossible to write a Java program that can make this determination correctly in all cases. It is also provably impossible to write a TM program that can make this determination. On the other hand, we can invent a magic-pixie based system that can do that.
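
To make the impossibility concrete, here is a rough sketch of the standard diagonal argument behind that claim, written in Python rather than Java purely for brevity; the names would_halt and troublemaker are illustrative assumptions, not anything from the discussion. If a correct would_halt existed, feeding troublemaker its own source would force it to both halt and run forever.

Code:
# Sketch of the diagonal argument: assume a perfect halting decider exists,
# then build a program it must get wrong.  All names are hypothetical.

def would_halt(program_source, program_input):
    """Pretend this returns True if the program halts on the input and
    False otherwise.  Provably, no such function can be written correctly."""
    raise NotImplementedError("no general halting decider exists")

def troublemaker(program_source):
    # Ask the supposed decider about this very program run on itself...
    if would_halt(program_source, program_source):
        while True:          # ...and do the opposite: never halt
            pass
    return "halted"          # ...or halt immediately

# If would_halt says troublemaker (run on its own source) halts, it loops;
# if it says it loops, it halts.  Either way the decider is wrong.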

After all, the heart and the liver can do something a TM can't, as can a hammer, so why is it that we assume the brain cannot?

Because the behavior of a hammer is not typically described in terms of computation or of input/output relationships, while the brain is. If you describe what a hammer does in terms of input-output relationships, then a TM can indeed solve a problem isomorphic to the input-output relationships characteristic of hammers.




Are we talking about only the lowest-level outputs? If so, does it necessarily follow that there can be no loss of functionality for higher-level tasks?

There is no such thing as "lowest-level" or "highest-level" outputs. Input is input, output is output.


For instance, a computer that plays digital movies loses the ability to "play movies" when running very slowly, even though it performs all its calculations.

It does not. In fact, that's how a lot of CGI movies are created. One frame at a time, taking several seconds or minutes per frame. Burn it onto a DVD and watch it, and it looks just as the director intended.
 
Because the behavior of a hammer is not typically described in terms of computation or of input/output relationships, while the brain is. If you describe what a hammer does in terms of input-output relationships, then a TM can indeed solve a problem isomorphic to the input-output relationships characteristic of hammers.

Ok. But if a TM system does that, then it is simulating the action of the hammer, not performing it, correct? No real nails get driven.

ETA: If the input/output relationships of the brain are "typically described in terms of computation", is that sufficient to conclude that the brain is, in fact, a TM?
 
There is no such thing as "lowest-level" or "highest-level" outputs. Input is input, output is output.

Sure, if you're only looking at models. Is that true in physical reality?

Again, this is a genuine question, not some sort of attempt at a "gotcha".
 
It does not. In fact, that's how a lot of CGI movies are created. One frame at a time, taking several seconds or minutes per frame. Burn it onto a DVD and watch it, and it looks just as the director intended.

Right. Play it at speed and you're "playing a movie". But slow down the operating speed and you're not "playing a movie". I didn't ask about editing a movie.
 
Regarding high-level and low-level tasks, let me offer this example.

A water hose will produce the same output at any operating speed. Slow down the inputs and you'll get the same outputs over a longer period of time.

But you can only pressure-wash your house at a high operating speed.

Pressure-washing your house is a high-level task. It makes no sense to say that you can perform that high-level task at any operating speed just because the outputs are the same at any operating speed.

And although it is not a hydraulic system (as once thought), there is no denying that the brain is a purely physical system. When we describe it in abstract terms, we are being metaphorical.

Why, then, should it be impossible that any function of the brain should depend on sufficient operating speed?
 
I'm sorry, I don't understand the analogy. It seems unnecessarily confusing to me because, of course, we can speak of individual neurons, so to use the analogy of an "I'm aware" neuron just complicates the discussion on a topic that's already very hard to discuss.

I think it's more accurate to say that Joe's brain momentarily is in a state of conscious awareness of the event.

Perhaps not the best analogy. But my point is that it is all just neurons (and other brain stuff) in different states. One step forward, and they naturally evolve to the next state.

But if we're talking about slowing down a processor within the framework of an unchanging external temporal referent, then we have to ask ourselves whether the high-level tasks can be maintained in that environment, even if the low-level outputs are the same.

Then I agree, but then what is this temporal referent? Natural law?
 
Perhaps not the best analogy. But my point is that it is all just neurons (and other brain stuff) in different states. One step forward, and they naturally evolve to the next state.

Then I agree, but then what is this temporal referent? Natural law?

Yes, neuronal activity does progress naturally. I'm afraid I don't understand your point.

I also don't get the reference to Natural Law.
 
Yes, neuronal activity does progress naturally. I'm afraid I don't understand your point.

I just don't understand why we would need another layer on top of that to produce consciousness.

I also don't get the reference to Natural Law.

Sorry if I come across as a little confused (I am after reading this thread), but what is this temporal referent?
 
I just don't understand why we would need another layer on top of that to produce consciousness.

I'm not reading you clearly.

Another layer on top of what?

Sorry if I come across as a little confused (I am after reading this thread), but what is this temporal referent?

Well, it's the surrounding physical environment.

It's like the laser example or the pressure-washing example I gave earlier.

If you're going to pressure-wash your house, you can't consider the pressure-washing apparatus in isolation, because it's a physical set-up in real spacetime.

If you slow your system down relative to the environment, you lose pressure and can't perform the task you need to perform.

If you made an abstract model of the system, with no reference to the environment, you'd conclude that the outputs of the system were unrelated to operating speed. Speed it up or slow it down, you still have a flow of water out of the system that is consistent with the inputs.

But in real space, it makes a difference how fast the water is moving through the hose, because if it moves too slowly, you can't pressure-wash your house.

Same with the laser. A computer that controls a laser can operate at any speed, and this does not change anything about the informational outputs of the system. But it does make a difference on a higher level, since you don't get the laser if the system is run too slowly.

So let's say you model the brain on a computer. You can run the simulation at any speed, and the computer will calculate everything exactly the same. The computer can be run at any operating speed and come up with the same output.

But that doesn't necessarily mean that an actual physical brain can be run at any speed and perform the same high-level tasks at all speeds.
 
Regarding high-level and low-level tasks, let me offer this example.

A water hose will produce the same output at any operating speed. Slow down the inputs and you'll get the same outputs over a longer period of time.

But you can only pressure-wash your house at a high operating speed.

Pressure-washing your house is a high-level task. It makes no sense to say that you can perform that high-level task at any operating speed just because the outputs are the same at any operating speed.

And although it is not a hydraulic system (as once thought), there is no denying that the brain is a purely physical system. When we describe it in abstract terms, we are being metaphorical.

Why, then, should it be impossible that any function of the brain should depend on sufficient operating speed?
Remember that brains just process information. They don't pump blood or pound nails. Nor can TMs. Both a brain and a TM can hold a representation of a hammer, but no, that's not a real hammer.

Analogies between information and water will fall apart quickly because information can be duplicated and lost, among other things.

A closer analogy is a crank-driven adding machine. If it's a true-synchronous machine you should be able to turn the crank as slow as you like and yet still get the same answer after the same number of turns.

It is possible to design the machine so it doesn't work right below a certain speed, say if some part of the mechanism relies on inertia. In modern computers the main DRAM memory has this limitation: it's dynamic and needs continuous refreshing. But note that when single-stepping a program (as in the OP), the refreshing of DRAM itself isn't slowed down. Only the steps of the program are executed at a slower rate.
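
As a side note, the "turn the crank as slowly as you like" property is easy to illustrate with a toy model. This is only a sketch in Python (CrankAdder and its methods are made-up names): the answer depends solely on the number of crank turns, not on the wall-clock time between them.

Code:
import time

class CrankAdder:
    """Toy fully synchronous adding machine: state changes only when the
    crank is turned, so turning it slowly cannot change the answer."""

    def __init__(self):
        self.total = 0
        self.addend = 0

    def set_addend(self, n):
        self.addend = n

    def turn_crank(self, pause=0.0):
        self.total += self.addend   # one discrete step of the mechanism
        if pause:
            time.sleep(pause)       # an arbitrarily slow operator

machine = CrankAdder()
machine.set_addend(7)
for _ in range(3):
    machine.turn_crank(pause=0.5)   # half a second per turn...
print(machine.total)                # ...still prints 21, same as cranking fast

A mechanism that relies on inertia, or DRAM that needs continuous refreshing, is exactly a case where this property fails, which is the distinction being drawn above.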
 
Remember that brains just process information. They don't pump blood or pound nails.

I disagree.

They don't "process information". That's a metaphorical abstraction.

Brains are physical organs just like hearts and livers. What they do is entirely physical. 100%, no ifs, ands, or buts.

There ain't no magic going on, and "process information" is an entirely human metaphor.

We make a fundamental mistake if we confuse our abstractions for physical reality.
 
A closer analogy is a crank-driven adding machine. If it's a true-synchronous machine you should be able to turn the crank as slow as you like and yet still get the same answer after the same number of turns.

The brain does not add. I defy you to demonstrate that it does.

And when you get right down to it, neither does your crank-driven "adding machine". All the "adding" is entirely in the mind of the homunculus interpreting the actual physical outcomes.

No real "adding" has occurred.

ETA: And there ain't no homunculus in the brain.
 
It is possible to design the machine so it doesn't work right below a certain speed, say if some part of the mechanism relies on inertia. In modern computers the main DRAM memory has this limitation: it's dynamic and needs continuous refreshing. But note that when single-stepping a program (as in the OP), the refreshing of DRAM itself isn't slowed down. Only the steps of the program are executed at a slower rate.

Again, this makes sense only if you incorporate a human interpreter of what's going on.

There is no such interpreter in the brain.
 
I disagree.

They don't "process information". That's a metaphorical abstraction.

Brains are physical organs just like hearts and livers. What they do is entirely physical. 100%, no ifs, ands, or buts.

There ain't no magic going on, and "process information" is an entirely human metaphor.

We make a fundamental mistake if we confuse our abstractions for physical reality.

Information is a 100% physical pattern. No magic required. Is your brain not taking in information in the form of patterns of nerve impulses, and sending out other patterns of nerve impulses to your muscles?
 
The brain does not add. I defy you to demonstrate that it does.
It was a simple analogy of single-stepping, but still I don't know how you can say that a brain doesn't add: mine is sure capable of that. Demonstration: 1+1=2 (no, I didn't cheat and use a calculator!)

And when you get right down to it, neither does your crank-driven "adding machine". All the "adding" is entirely in the mind of the homunculus interpreting the actual physical outcomes.

No real "adding" has occurred.

ETA: And there ain't no homunculus in the brain.
Why the need for a homunculus?

Do you do all math in your head because calculators don't do "real" math?
 
Information is a 100% physical pattern. No magic required. Is your brain not taking in information in the form of patterns of nerve impulses, and sending out other patterns of nerve impulses to your muscles?

Agreed, as long as we agree that our term "information" is an abstraction and a metaphor.

As a practical matter, we have to speak in terms of information if we're to have a reasonable conversation.

But we should be careful not to confuse our abstractions for physical reality.

We can speak of computers and brains in terms of information processing.

But we should not allow ourselves to fall into the error of assuming that just because we use similar metaphors for these objects, we are therefore speaking of analogous systems.

If brains and computers are truly analogous, then we should be able to trace the direct correspondence.

If brains are Turing machines -- and they very well may be -- then we should be able to identify which components of a human brain correspond directly to the essential components of a Turing machine.
 
It was a simple analogy of single-stepping, but still I don't know how you can say that a brain doesn't add: mine is sure capable of that. Demonstration: 1+1=2 (no, I didn't cheat and use a calculator!)

So tell me, what physical structure of the brain adds 1 and 1 and gets 2?

ETA: It seems to me that there are structures which can be said to do this. The question is, do they directly correspond to what a computer does when it is said to add 1 and 1 to get 2?
 
Do you do all math in your head because calculators don't do "real" math?

That's a badly formed question.

We build calculators so that we can provide inputs and get outputs which we can interpret as answers.

Brains don't operate that way.

Brains have no programmers. Brains have no interpreters.

Brains are purely physical objects which produce purely physical responses.

Calculators are also physical objects which produce purely physical responses, but they require human observers to interpret their physical responses as representative of calculations.

Remove the human observer, and you're left with a bunch of merely physical actions which don't actually add 1 and 1 and get 2.

In other words, the calculations done by calculators are not actual. They are entirely symbolic. They are purely symbolic "information processors" on that level.

Of course, as physical systems, they are "information processors" on another level. But so are muscles and stars.

Brains, as purely physical objects, are also "information processors" in this trivial sense. But in that sense, they cannot be distinguished from muscles, stars, or calculators.

Brains don't "process information" in the way that calculators or computers do on a symbolic level. Try as you might, you won't find any analogs.

The brain doesn't actually manipulate symbols. We speak of it as if it does, but in hard physical reality, it is no different from hearts and livers.
 
Agreed, as long as we agree that our term "information" is an abstraction and a metaphor.

As a practical matter, we have to speak in terms of information if we're to have a reasonable conversation.

But we should be careful not to confuse our abstractions for physical reality.

We can speak of computers and brains in terms of information processing.

But we should not allow ourselves to fall into the error of assuming that just because we use similar metaphors for these objects, we are therefore speaking of analogous systems.

If brains and computers are truly analogous, then we should be able to trace the direct correspondence.

If brains are Turing machines -- and they very well may be -- then we should be able to identify which components of a human brain correspond directly to the essential components of a Turing machine.
The essential components of a Turing Machine (for our purposes) are:

1) State information storage.
2) A means of recognizing part of that state and using it to modify the state to generate the next state.
3) A means to input and output some of that state.

That's it. The brain's neurons with their malleable interconnections map into that. The interesting part is in the details of next-state generation. Also remember that a TM can represent particular states using a form of fuzzy logic, where every representation is given a level of certainty.
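
For concreteness, here is a minimal Turing-machine sketch in Python with exactly those three ingredients: stored state (tape, head position, current state), a rule table that reads part of that state to generate the next state, and input/output through the tape. The particular rule table shown (a binary incrementer) is just an illustrative assumption.

Code:
# Minimal Turing machine: state storage, next-state generation, and I/O
# through the tape.  The "increment" rule table is only an example.

def run_tm(tape, rules, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]   # next-state generation
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Rules to add 1 to a binary number: scan right to the end, then carry left.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_tm("1011", increment))   # -> 1100

Nothing here says the implementation must literally look like a tape and a head; as noted below, the same input-output behavior could come from logic gates, a program, or a neural net.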

Just how a particular function is implemented can vary widely. The exact same results for a function can be gotten from hard-wired logic gates, a program, neural-net hardware, neural-net program, etc. There's no need for a one-to-one fine-level mapping unless we're reverse-engineering parts of the brain, e.g., Blue Gene.

"Information" is not a metaphor here-- it's real. If I ask you to multiply two numbers, you receive information that includes those numbers and the command to multiply them. You can then return the number. I can ask a calculator to do the exact same multiplication and (hopefully!) get the same answer. Sure, I have to speak the calculator's language of buttons and display, but the critical information is the same.

I don't see the purpose of limiting the word "understand" to just humans, any more than we limit the word "memory". If a calculator gives me what I recognize as the right answer when I press its "multiply" key, then I say that the calculator fundamentally understands that keypress to mean "multiply". Any understanding that we may have beyond that doesn't take that away.
 
Consider a conscious robot with a brain composed of a computer running sophisticated software. Let's assume that the appropriately organized software is conscious in a sense similar to that of human brains.

Would the robot be conscious if we ran the computer at a significantly reduced clock speed? What if we single-stepped the program? What would this consciousness be like if we hand-executed the code with pencil and paper?

I can't take credit for these questions; they were posted on another forum. The following paper is relevant to this issue:


~~ Paul

Has anyone here read the essay "Fast Thinking" by Dennett? (It's a chapter in The Intentional Stance.) It deals with this question specifically.
 
