
Robot consciousness

Unfortunately, our language is inadequate to the material, I'm afraid.

We could talk about 3 sets here, which we might call perception, consciousness, and self-awareness.

(Terms of convenience there)
Yes, I agree. I think it is easy to lose one's way with all the different terms that have a closely similar meaning.

One term that I miss is a term for having brain activity, without awareness.

It is often claimed, though as far as I know never proved, that some people in an unconscious state are still able to perceive - and store in memory - certain things that happen around them. If this is true, we need a term for this process.
 
It is often claimed, though as far as I know never proved, that some people in an unconscious state are still able to perceive - and store in memory - certain things that happen around them. If this is true, we need a term for this process.

I believe there are studies showing this effect, but I can't recall enough about them to see if they're on the Web. As I remember, it involved presenting auditory stimuli to sleeping subjects. But I could be wrong about that.

Also ran across a very recent study showing more activity during deep sleep than was previously known.
 
That's it? Those are our choices? Interesting.

More or less.

Positing a TM computer brain is only begging the question if it's not obligatory. If it is obligatory, and it's true that any TM can perform any task at any speed (which seems to assume that all macro-level tasks can be considered accomplished as long as micro-level output is the same), then yeah, we're done.

Btw, has it been demonstrated that the brain in its entirety is a TM? Is this now accepted in the field of biology? (Not being a smartass, here, honest question.)

Most biologists don't know or care. Most biologists who know and care believe the brain is a TM, because the alternatives involve magic pixies.

There are really only three alternatives. The first is that the brain is actually less powerful than a TM, which wouldn't really mean much, because we could still build a conscious TM. So we can dismiss that as equivalent to choice #2, the brain is as powerful as a TM.

Choice #3, the brain is more powerful than a TM, suffers from the fundamental problem that we have no way to envision or formalize anything that is more powerful than a TM that doesn't involve an explicit reference to something magical like an "oracle" or the ability to code data finer than the Planck length. While these may not be explicitly impossible, we've never been able to come up with anything at all credible that provides a physical instantiation of such things. Instead, we get idiocies like Penrose, who misunderstands Goedel's theorem (Goedel's theorem says computers make mistakes, hence humans can't be computers, because.... humans never make mistakes? Excuse me) and quantum physics (why do QM effects lead to consciousness in human microtubules, but not in amoebae, which also have them?).

Here's what one noted commentator (a neurologist) has to say:

3.5 To meet this objection, Penrose makes wishful conjectures about properties of microtubules. These proteins, found in all cells, have properties that could conceivably be useful for computation within individual neurons (Hameroff & Watt, 1982). But while it may be conceivable that microtubules maintain quantum coherence within a neuron, it is difficult to imagine how this coherence can be maintained across neurons. Penrose is aware of this problem when he says (p. 372): "... the quantum coherence must leap the synaptic barrier between neuron and neuron. It is not much of a globality if it involves only individual cells!"

3.6 What Penrose's story lacks is an account of how quantum coherence can leap the synaptic barrier. All he says is that evolution is clever, and maybe it has found a way to achieve this impossible sounding feat. He discusses how microtubules can alter synaptic strengths in an interesting way, but nowhere is there any discussion of the nature of synaptic modulations that can be achieved quantum-mechanically but not classically. The quantum nature of neural activity across the brain must be severely restricted, since Penrose concedes that neural firing is occurring classically. I kept rereading Sections 7.6 (Microtubules and consciousness) and 7.7 (A model for a mind) in hopes of finding a plausible argument for how a coherent quantum state could be preserved across the brain. I thought that Penrose might invoke the .5 cm microwaves that have been associated with microtubules, but he was too smart for that. He must have felt that invoking microwaves to achieve quantum coherence across neurons would be too easily disproved. For his hypothesis to have any chance of success, Penrose needs new mechanisms of neural communications other than the presently known electrochemical mechanisms, but given the explanatory power of presently-known mechanisms, it is unlikely that neuroscientists will mount a search for these new and exotic mechanisms soon.

So in other words, "we don't understand cognition. We don't understand QM, either. QM appears magical, so maybe it will give us the magical non-computational properties we need to produce something more powerful than a TM."

So the general opinion is that "more powerful than a TM" is not a realistic option. Which leaves, for lack of anything better, "equivalent to a TM" as literally the only candidate left standing.
 
One thing that I'd like to go ahead and clarify.... Sometimes it's claimed that brain processes involved in conscious awareness are not in any way distinct from other processes.

In a way, that's true, insofar as all brain activity is neuronal activity. But it's only superficially and partially true -- so much so, in fact, that on the whole the statement is false. It contradicts both the experimental evidence and our own experience.

Experimentally, we have to look at the subliminal studies, the deep sleep studies, the blind maze study, and the "conscious signature" study.

Studies on subliminal stimuli clearly show that we are not aware of having been exposed to them, yet the brain processes them and stores them in memory -- we know this because they can influence subsequent behavior.

The blind maze study also indicates that the difference between what we consciously experience and what we don't is not a case of having been aware and forgetting, because we can communicate with subjects as they navigate the maze.

Deep sleep studies demonstrate that in states of awareness -- dreaming sleep and wakefulness -- there are patterns of communication across modules of the brain which cease when we're not in states of conscious awareness.

The conscious signature study took it even farther by actually identifying "signature" patterns of brain-wide communication that occur when we're exposed to stimuli that we report being aware of, but which don't occur when we're exposed to subliminal stimuli.

Taken together, these studies demonstrate that conscious experience is a specific type of brain activity, involving the active coordination of highly processed data. Consciousness is something the brain does because it's designed to do it, just as maintaining our heartbeat is something the brain does because it's designed to do it.

(And this is how we know that thermostats aren't conscious -- they're not designed to be conscious.)

With regard to experience, the cocktail party effect is one example.

When we hear our name -- or something very close to it -- in the background buzz of conversation, it leaps out at us as if it were spoken more loudly and clearly than all the rest of the speech, even though it was not.

That's because our brains are receiving and processing all the speech all the time. When the pattern of your name appears in the data stream, it presents a match with a pattern stored in your memory -- a pattern that gets very high usage, and which is itself associated with a slew of other high-use patterns.

The number and quality of matches is so strong that this particular pattern is served up (along with many of the associations, as well as current environmental data, in a kind of bundle) to the processors that control your conscious experience. The result is that you become consciously aware that "a woman to my right just said my name" while all the rest of the speech remains background noise.

If subsequent input from that voice continues to strongly match stored patterns, you'll continue to attend to what she's saying. If nothing matches (unknown voice, words of no particular interest) it's likely that her vocalizations will immediately sink back into the background and you won't even be able to say who spoke your name or what they were talking about -- it will remain a blip on your radar.
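To make that pattern-matching story concrete, here's a toy sketch in Python. Everything in it -- the stored patterns, the activation weights, the threshold -- is invented purely for illustration; this is a cartoon of the idea, not a real model of memory or attention:

```python
# Toy sketch of the salience idea above (all weights invented for illustration):
# every incoming word is matched against stored patterns, and only matches whose
# activation clears a threshold get promoted to "conscious attention".

# Hypothetical long-term memory: pattern -> base activation
# (high-use patterns like your own name score high)
MEMORY = {"dave": 9.0, "fire": 8.5, "coffee": 3.0, "meeting": 2.5}

THRESHOLD = 5.0  # arbitrary cutoff for promotion to awareness

def attend(stream):
    """Return the words from a background speech stream that 'leap out'."""
    promoted = []
    for word in stream:
        activation = MEMORY.get(word.lower(), 1.0)  # unknown words score low
        if activation >= THRESHOLD:
            promoted.append(word)   # served up to conscious processing
        # everything else is still received and processed,
        # but stays background noise
    return promoted

background_buzz = ["coffee", "meeting", "Dave", "budget", "fire"]
print(attend(background_buzz))  # ['Dave', 'fire']
```

The point of the sketch is that every word in the stream gets processed; only the high-activation matches cross into the "aware" channel.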

So we should be clear that conscious awareness arises from specific processes which are designed to produce it.
 
So the general opinion is that "more powerful than a TM" is not a realistic option. Which leaves, for lack of anything better, "equivalent to a TM" as literally the only candidate left standing.

Cool, thanks. Btw, I read Searle's refutation and I found it fatally flawed as well.
 
There are really only three alternatives. The first is that the brain is actually less powerful than a TM, which wouldn't really mean much, because we could still build a conscious TM. So we can dismiss that as equivalent to choice #2, the brain is as powerful as a TM.

Choice #3, the brain is more powerful than a TM, suffers from the fundamental problem that we have no way to envision or formalize anything that is more powerful than a TM that doesn't involve an explicit reference to something magical like an "oracle" or the ability to code data finer than the Plank length.

I suppose, just for completeness, I should mention and discard choice #4, the idea that there are models of computation that are incomparable to a Turing machine (i.e. more powerful in some ways and less powerful in others). We have even less support for this idea than for choice #3. While we can at least imagine how magic pixies could make a Turing machine MORE powerful, no one has ever been able to produce a [sensible] theory of computation that is incomparable to a TM -- and anyone who did find such a theory would almost certainly revolutionize the field. We have come up with zillions of theories of computation, and zillions of formalisms,.... but they all end up being provably Turing-equivalent.
 
drkitten,

Are analog computers Turing machines, or only to an arbitrary level of precision?
 
drkitten,

Are analog computers Turing machines, or only to an arbitrary level of precision?

I dealt with that at length earlier. If you assume that Heisenberg's uncertainty principle implies a limit on precision of representation, then a sufficiently powerful analog computer is a Turing machine. If you assume that magic pixies are willing to duct-tape Heisenberg's mouth shut and throw him in the trunk of a large black sedan, then they can be (formalized to be) more powerful than Turing machines.
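The precision argument above can be sketched numerically. The idea: if physics bounds measurement precision to some epsilon, then an "analog" quantity in a bounded range can only take finitely many distinguishable values, each of which a digital machine represents exactly. The epsilon below is an arbitrary stand-in, not a real physical constant:

```python
# Sketch of the bounded-precision argument: with measurement floor epsilon,
# an "analog" value in [lo, hi) has only (hi - lo) / epsilon distinguishable
# levels -- and a finite set of levels is trivially digital.
import math

def distinguishable_states(epsilon, lo=0.0, hi=1.0):
    """Analog values in [lo, hi) a bounded-precision device can tell apart."""
    return math.ceil((hi - lo) / epsilon)

def bits_needed(epsilon):
    """Digital bits sufficient to label every distinguishable analog state."""
    return math.ceil(math.log2(distinguishable_states(epsilon)))

eps = 2 ** -20                          # pretend precision floor
print(distinguishable_states(eps))      # 1048576 distinguishable levels
print(bits_needed(eps))                 # 20 bits suffice to encode them all
```

So the only way an analog machine escapes Turing equivalence is if there is literally no such epsilon -- which is exactly the "magic pixies gag Heisenberg" assumption.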
 
So in other words, "we don't understand cognition. We don't understand QM, either. QM appears magical, so maybe it will give us the magical non-computational properties we need to produce something more powerful than a TM."

So the general opinion is that "more powerful than a TM" is not a realistic option. Which leaves, for lack of anything better, "equivalent to a TM" as literally the only candidate left standing.

Why would quantum coherence allow something more powerful than a TM? Any quantum dynamics can be simulated on a classical computer, although it might take an extremely large number of computations.

And in fact there's no evidence I'm aware of that quantum computers can solve problems much faster than classical ones. That is, there are various algorithms that provide polynomial and sub-exponential speed-ups, but none that (say) solve an NP-complete problem in polynomial time.
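The "extremely large number of computations" is easy to make concrete: an n-qubit state is a vector of 2**n complex amplitudes, so a classical simulation's memory and work grow exponentially even though each step is ordinary linear algebra. A minimal sketch (the uniform-superposition state is the simplest case to write down):

```python
# Sketch of why classically simulating quantum dynamics is possible but costly:
# an n-qubit state needs 2**n amplitudes, so the classical bookkeeping grows
# exponentially with the number of qubits.

def hadamard_all(n):
    """State after a Hadamard on each of n qubits, starting from |00...0>:
    the uniform superposition, with 2**n equal amplitudes."""
    dim = 2 ** n
    amp = 1 / 2 ** (n / 2)     # each basis state gets amplitude 1/sqrt(2**n)
    return [amp] * dim          # 2**n numbers a classical machine must track

for n in (1, 2, 10, 20):
    print(n, "qubits ->", len(hadamard_all(n)), "amplitudes")
# 20 qubits already need about a million amplitudes; a few hundred qubits
# would need more amplitudes than there are atoms in the observable universe.
```

Exponential cost, but still computable -- which is why quantum dynamics doesn't by itself buy anything beyond a TM.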
 
Why would quantum coherence allow something more powerful than a TM?

Because we (by which read, Penrose and his followers) don't understand Quantum Mechanics. Any sufficiently badly understood science is indistinguishable from magic (with appropriate apologies to Mr Clarke).

Any quantum dynamics can be simulated on a classical computer, although it might take an extremely large number of computations.

And in fact there's no evidence I'm aware of that quantum computers can solve problems much faster than classical ones. That is, there are various algorithms that provide polynomial and sub-exponential speed-ups, but none that (say) solve an NP-complete problem in polynomial time.

.... which sort of supports what I've been saying about Penrose all along. People read him and appreciate him through the gaps of their own background; the physicists find his neuroanatomical musings and his mathematical theories interesting, while discounting his obvious misunderstanding of QM. Mathematicians find his misinterpretations of Goedel appalling but appreciate his ability to tie QM to neurobiology. And neurobiologists know that his theory of microtubules is gibberish, but think that he may be on to something with his mathematical treatment.

ETA: You can see what I'm talking about even in the criticism I cited. The author of that piece is a biologist, not a physicist (Berkeley Vision Sciences Dept.) He -- Dr. Klein -- doesn't know enough about QM to say whether or not "quantum coherence" would allow a violation of the Goedelian argument, or whether QM would allow quantum computers to "solve problems much faster than classical ones." (Indeed, he accepts Penrose's argument that they can in section 2.2 of his full paper.) He merely points out that even if Penrose's physical claims are true, they don't support the neurobiological claims he wishes to make.

Now you, in your capacity as an expert physicist, point out that his physical claims are also not true. I'm not sure how much you know about the biology of microtubules,.... but I'll bet not a lot, because it's not your field. So you are focusing on the part YOU know and finding it wanting.

And, indeed, despite the fact that Dr. Klein finds Penrose's argument to be total garbage, he still can't avoid finding it compelling: "Even though we are not driven to quantum mechanics because of an inability of classical mechanics to deal with neural activity, it is still worth exploring Penrose's quantum ideas. For one thing, there is the intriguing link between quantum mechanics and the role of the observer." I assume I don't need to tell a professional physicist such as yourself about the sort of nonsense that gets pushed under THAT heading.....
 
.... which sort of supports what I've been saying about Penrose all along.

I once saw him give a seminar on a subject I know very well (and he ought to as well). It was crackpot nonsense. Odd, since he did brilliant work in the past. Maybe his brain is decohering...

On this topic, is there any need to postulate that human brains are much more powerful than one would expect if they operate classically? Can we (very roughly) estimate the rate at which the human brain processes information and compare it to a classical computer of (very roughly) the same size?

Having thought for two minutes: take a task humans ought to be very good at, such as face or voice recognition. I'd say it takes someone some not-too-tiny fraction of a second to recognize someone. How long would it take a well-programmed modern computer to do the same? Probably about the same time, or less. Human brains have ~100 billion neurons. Modern CPUs have billions of transistors.
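That back-of-envelope comparison can be written down explicitly. Every number here is a rough, order-of-magnitude guess, not a measurement -- the point is only that the two totals land in broadly comparable territory:

```python
# Back-of-envelope throughput comparison (all numbers are rough guesses).
NEURONS = 1e11            # ~100 billion neurons
SPIKE_RATE = 100          # ~100 Hz peak firing rate per neuron
brain_ops = NEURONS * SPIKE_RATE          # "spike events" per second

TRANSISTORS = 1e10        # ~10 billion transistors in a large CPU
CLOCK = 3e9               # ~3 GHz clock
UTILIZATION = 1e-3        # guess: small fraction doing useful work per cycle
cpu_ops = TRANSISTORS * CLOCK * UTILIZATION

print(f"brain ~ {brain_ops:.0e} events/s, cpu ~ {cpu_ops:.0e} ops/s")
# The two totals differ by only a few orders of magnitude -- the point being
# that no exotic quantum speedup is needed to explain raw brain throughput.
```

Obviously a spike and a transistor switch are not the same "operation", but nothing in the comparison cries out for super-Turing physics.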

So.... is there some other argument for needing to posit some kind of mysterious quantum speedup in the human brain?
 
I once saw him give a seminar on a subject I know very well (and he ought to as well). It was crackpot nonsense. Odd, since he did brilliant work in the past.

... which, I think, is part of the issue.

There are two possible explanations when you attend a seminar that is crackpot nonsense. One is that the speaker isn't as smart as he thinks he is. The second is that you aren't as smart as you think YOU are.

I heard the late psychologist Amos Tversky described once as an IQ test. "The faster you realize Amos is smarter than you, the smarter you are."

Given this type of preparation, I'd be more inclined to believe that I had misunderstood Amos than that he had delivered a seminar full of arrant nonsense. And a lot of people feel like that about Penrose, for the same reason.
 
So.... is there some other argument for needing to posit some kind of mysterious quantum speedup in the human brain?

Well, yes. A lot of the anti-computational arguments are based on the idea that there is something fundamentally different about computation in meat and computation in metal. In Penrose's case, he uses Goedel's theorem as the basis for that -- there are some theorems that any given computer program could never prove, but presumably human mathematicians could. Hence, human mathematicians are different.

Once you've identified that they are different, you need to find the cause for the difference. And since we know that neural computation is TM equivalent, and we know that massively parallel TMs are equivalent to serial TMs, that leaves,.... what?

You need to find some process that humans have and machines don't, or vice versa. "Quantum-scale microtubules" seems as good a collection of buzzwords as any.
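The parallel-to-serial equivalence mentioned above is worth making concrete, since it does a lot of work in this argument. A sketch: model two independent "machines" as Python generators and interleave them step by step on a single serial loop. They finish with exactly the same results as if each had run on its own hardware:

```python
# Sketch of the parallel-to-serial reduction: round-robin one step of each
# "machine" on a single serial loop; final answers are unchanged.

def machine(n):
    """A toy computation: sum 0..n-1, yielding once per step (one 'tick')."""
    total = 0
    for i in range(n):
        total += i
        yield                        # one clock tick of this machine
    return total                     # delivered via StopIteration.value

def run_serial(machines):
    """Single-threaded round-robin scheduler: step each live machine in turn
    until every one has halted, collecting its result when it does."""
    results = {}
    live = dict(machines)
    while live:
        for label in list(live):
            try:
                next(live[label])            # advance this machine one step
            except StopIteration as halt:
                results[label] = halt.value  # machine halted: record output
                del live[label]
    return results

out = run_serial({"A": machine(5), "B": machine(10)})
print(out)  # {'A': 10, 'B': 45} -- same answers a true parallel run would give
```

This is the standard trick behind "massive parallelism adds speed, not power": one serial stepper can always emulate many machines.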
 
Given this type of preparation, I'd be more inclined to believe that I had misunderstood Amos than that he had delivered a seminar full of arrant nonsense. And a lot of people feel like that about Penrose, for the same reason.

That's probably a wise attitude (wisdom not being an attribute I possess in any great abundance). But in this case there was an audience of experts, some rather accomplished, and I believe they all shared my opinion (although I doubt they would express it so strongly).

In Penrose's case, he uses Goedel's theorem as the basis for that -- there are some theorems that any given computer program could never prove, but presumably human mathematicians could.

But... they couldn't, right? Isn't that precisely what Goedel proved - that there are unprovable statements in any interesting logical system?

Or are you just reproducing Penrose's wrong "logic"?
 
But... they couldn't, right? Isn't that precisely what Goedel proved - that there are unprovable statements in any interesting logical system?

Or are you just reproducing Penrose's wrong "logic"?

I'm reproducing Penrose's wrong logic.

Goedel's theorem, stated with excruciating formality, states that any sufficiently powerful formal system must be either incomplete or inconsistent. Less formally, this means that any formal system must either miss some truths, or make some mistakes in its proofs.

Since computers are formal systems, this means that there are some truths any individual computer will miss, or else it will make some mistakes and prove some things incorrectly.

That much is unassailable.

Penrose wants to conclude that humans, however, are not formal systems. And since we know that humans never miss anything and never make mistakes, this clearly proves that humans are not formal systems and are more powerful than computers. And at this point Ms. Rational Discussion slips out of the lecture hall to powder her nose, because it's obvious that she won't be needed for a while.

Having said that, two different formal systems can have two different sets of unprovable truths; there may be some truths that can't be proved in system A, but can in system B, and vice versa. But that's not really a relevant objection next to the huge whopper implicit in the paragraph immediately above.
 
Penrose wants to conclude that humans, however, are not formal systems. And since we know that humans never miss anything and never make mistakes, this clearly proves that humans are not formal systems and are more powerful than computers. And at this point Ms. Rational Discussion slips out of the lecture hall to powder her nose, because it's obvious that she won't be needed for a while.

This is what I have always found incredible in his work (incredible as in "wtf is he thinking?").

If I remember correctly, his argument depends on the idea of a "perfect" human mathematician who is not only fully aware of its own brain state but also fully aware of the Goedelian incompleteness that such a system would run up against were it only TM-equivalent. But since this perfect mathematician can generate Godel sentences -- seemingly without end, in any formal system it is aware of -- it must somehow be more powerful than a TM.

For those versed enough in logic, a fallacy appears (it's famous, btw -- just google "Lucas-Penrose fallacy"). But even before I realized that, I was saying to myself, "where the heck can we find such a human mathematician?"

I think it is quite absurd that Penrose needs to reference a non-existent perfect human in order to argue the case of the average joe being more than a TM.
 
But... they couldn't, right? Isn't that precisely what Goedel proved - that there are unprovable statements in any interesting logical system?

Or are you just reproducing Penrose's wrong "logic"?

I think that is exactly what is called the Lucas-Penrose fallacy. And I also think it is simply the result of ignoring a very famous bit of knowledge called the halting problem.

Because Penrose (and I think Lucas) seem to hold that humans do not suffer from the halting problem, i.e., we can determine whether any program will halt, whereas normal TM-equivalent computers can only determine whether specific programs will halt.
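The "specific programs" distinction is easy to demonstrate. A machine can decide halting for particular programs -- for instance by running them under a step budget -- but no fixed budget works for every program; that irreducible "unknown" is the content of the halting problem. A small sketch (the programs and budget are invented for illustration):

```python
# Sketch of deciding halting for *specific* programs with a step budget.
# For many concrete programs this gives a definite answer; in general it
# cannot, which is exactly the halting problem's bite.

def halts_within(prog, arg, budget):
    """Run generator-style program prog(arg) for at most `budget` steps.
    Returns True if it halted, or None if the budget ran out (unknown)."""
    g = prog(arg)
    for _ in range(budget):
        try:
            next(g)
        except StopIteration:
            return True
    return None          # can't tell: maybe loops forever, maybe just slow

def countdown(n):        # obviously halts
    while n > 0:
        n -= 1
        yield

def forever(n):          # obviously loops
    while True:
        yield

print(halts_within(countdown, 3, 100))   # True -- decided for this program
print(halts_within(forever, 3, 100))     # None -- budget spent, undecided
```

A human looking at `forever` can see it never halts -- but so can a simple static check for that particular pattern; the Lucas-Penrose move needs humans to do this for every program, which is the part that's never been shown.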

And, as you might imagine, this is used in the context of "Humans can generate Godel sentences for every formal system." Penrose and Lucas think that is a true statement, although I am not aware of any proof. In particular, I am of the opinion that humans cannot generate -- and more importantly, grok -- Godel sentences for their system. Penrose thinks using a perfect mathematician that is fully aware of itself (in a formal sense) solves the problem, but I don't agree.

Hofstadter talks about exactly this fallacy in a chapter of GEB.

Note that I have always thought the Epimenides paradox was a Godel sentence for the human system that we can't grok, but I know drkitten, yy2bggs, and others (generally, everyone who is way smarter than I am) disagree. I still don't understand why I am wrong, though. Maybe these smart people will get tired of arguing with people that don't want to learn and spend time on me instead lol.
 
Regarding the slowed robot.... ;)

I still have some problems getting my mind around certain aspects of the situation, and I'm hoping that the computer/info sci folks here will help me resolve them.

(I thought I had resolved the "green triangle" thought experiment from a biological frame of reference, but I've hit a snag there, perhaps illusory, and I need to attack it again, but I won't bring that up here yet.)

The basic framework I'm knocking around in is the one summarized by drkitten earlier:

By assumption, the robot is conscious
Since the robot is conscious, the computer that is the robot's brain is conscious
Since the computer is a TM, the TM that is the robot's brain is conscious
Since a TM can be single-stepped without loss of functionality, the conscious robot can be single-stepped without loss of functionality.
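The single-stepping premise in that last line can be made concrete with a toy machine. The rule table below is invented (a unary incrementer) and the "pause" is just a callback; the sketch only illustrates that pausing arbitrarily long between transitions cannot change the final tape:

```python
# Minimal Turing-machine sketch of "single-stepping loses no functionality":
# running one transition at a time, with arbitrary pauses in between, ends in
# exactly the same tape as running flat out. Toy machine: append a 1.
import time

# (state, symbol) -> (new_symbol, head_move, new_state)
RULES = {("scan", "1"): ("1", +1, "scan"),
         ("scan", "_"): ("1", 0, "halt")}   # blank found: write 1, halt

def run(tape, stepper):
    tape = dict(enumerate(tape))            # sparse tape: position -> symbol
    pos, state = 0, "scan"
    while state != "halt":
        sym = tape.get(pos, "_")
        new_sym, move, state = RULES[(state, sym)]
        tape[pos] = new_sym
        pos += move
        stepper()       # pause point: nothing here can affect the result
    return "".join(tape[i] for i in sorted(tape))

fast = run("111", lambda: None)              # full speed
slow = run("111", lambda: time.sleep(0.001)) # paused between every step
print(fast, slow, fast == slow)              # 1111 1111 True
```

The final tape depends only on the rule table and the input, never on the wall-clock gaps between transitions -- which is exactly what's at issue in the questions below.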

I asked if we can be sure that the brain is essentially a TM.

drkitten's response, in part:

There are really only three alternatives. The first is that the brain is actually less powerful than a TM, which wouldn't really mean much, because we could still build a conscious TM. So we can dismiss that as equivalent to choice #2, the brain is as powerful as a TM.

Choice #3, the brain is more powerful than a TM, suffers from the fundamental problem that we have no way to envision or formalize anything that is more powerful than a TM that doesn't involve an explicit reference to something magical like an "oracle" or the ability to code data finer than the Planck length. While these may not be explicitly impossible, we've never been able to come up with anything at all credible that provides a physical instantiation of such things.

My questions:

What is meant by "more powerful"? Does that simply mean that it can do things that a TM can't?

If so, why would something magical be required to say that it can do something a TM can't?

After all, the heart and the liver can do something a TM can't, as can a hammer, so why is it that we assume the brain cannot?


When we say that "a TM can be single-stepped without loss of functionality", what exactly are we talking about?

Are we talking about only the lowest-level outputs? If so, does it necessarily follow that there can be no loss of functionality for higher-level tasks?

For instance, a computer that plays digital movies loses the ability to "play movies" when running very slowly, even though it performs all its calculations. A computer that is used to produce a laser does not, I presume, work properly when slowed down drastically.
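The movie example can be stated as a deadline condition. A real-time task is defined against an external clock: the slowed computer still computes every frame correctly, but "playing a movie" additionally requires each frame to be ready on time. A minimal sketch (the frame rate and timings are illustrative):

```python
# Sketch of the real-time point: output correctness is not the same as
# performing the high-level task, because the task has wall-clock deadlines.

FRAME_INTERVAL = 1 / 30      # external-clock seconds per frame at 30 fps

def playback_ok(compute_time_per_frame):
    """The task counts as 'playing a movie' only if each (correctly computed)
    frame is finished before its deadline on the external clock."""
    return compute_time_per_frame <= FRAME_INTERVAL

print(playback_ok(0.010))  # True  -- normal speed: frames beat the deadline
print(playback_ok(3.300))  # False -- slowed ~300x: every frame correct, late
```

Note the slowed machine fails the deadline test while producing bit-identical frames -- which is precisely the distinction between low-level outputs and higher-level tasks being asked about here.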

Must it be true that all higher-level functions can be performed by a TM running at any operating speed?


Thanks.
 
There doesn't appear to be any such neuron.

Rather, what must happen for Joe to be aware of seeing the triangle and to report this experience is for his brain to format the visual data -- which includes receiving a sufficiently long real-time signal, matching the data with stored patterns so that it's understood as a "green triangle", and deeming it important enough to flag as meriting conscious attention -- and make it available to conscious processing.

Then we agree on that part, the signals and memories are processed, and as a result, the value of the "I'm aware"-neuron is changed from 0 to 1. (Of course it's not one single neuron, but the current state of many neurons.)
The brain then, in its regular coordination of chunked data from various modules to produce conscious awareness, produces the conscious experience of seeing a green triangle.
I don't follow on the "produce the conscious experience" part. Joe's conscious experience with the green triangle starts at the moment the neurons are in the right states. If he was paused one single step before the computations came to their (then inevitable) conclusion, in this state he wouldn't yet be conscious. But if he was stepped one more step forward, he would.

A human brain could not be single-stepped, and neither could the beat of a heart, because they depend on "external clocks" like temperature, chemical reactions, the lifespan of signals, etc.

But in our simulation, everything is slowed down. If consciousness doesn't survive below a certain speed limit, doesn't that imply a mysterious external clock, existing outside of the simulation?
 
Then we agree on that part, the signals and memories are processed, and as a result, the value of the "I'm aware"-neuron is changed from 0 to 1. (Of course it's not one single neuron, but the current state of many neurons.)

I'm sorry, I don't understand the analogy. It seems unnecessarily confusing to me because, of course, we can speak of individual neurons, so to use the analogy of an "I'm aware" neuron just complicates the discussion on a topic that's already very hard to discuss.

I think it's more accurate to say that Joe's brain momentarily is in a state of conscious awareness of the event.

I don't follow on the "produce the conscious experience" part. Joe's conscious experience with the green triangle starts at the moment the neurons are in the right states. If he was paused one single step before the computations came to their (then inevitable) conclusion, in this state he wouldn't yet be conscious. But if he was stepped one more step forward, he would.

Which neurons?

Joe's conscious experience of the green triangle lasts as long as the brain processes which generate conscious experience are coordinating the sets of data that correspond to "I see a green triangle".

So we have to ask ourselves, how does the brain perform that coordination, and can it perform that task at any and all operating speeds?

A human brain could not be single-stepped, and neither could the beat of a heart, because they depend on "external clocks" like temperature, chemical reactions, the lifespan of signals, etc.

But in our simulation, everything is slowed down. If consciousness doesn't survive below a certain speed limit, doesn't that imply a mysterious external clock, existing outside of the simulation?

It might. In which case we may have to conclude that "Joe does not report seeing a green triangle" is totally bogus.

However, we must keep in mind that we're dealing with physical processes operating in physical space.

If we slow down the whole shebang, we're dealing with roger's example of time dilation, in which case I think everyone must conclude that nothing changes.

But if we're talking about slowing down a processor within the framework of a nonchanging external temporal referent, then we have to ask ourselves whether the high-level tasks can be maintained in that environment, even if the low-level outputs are the same.

I gave the example above of the high-level task "play a movie" and the task of powering a laser. Slowing the operating speed while the external timeframe remains unchanged does cause these tasks to stall.
 
