Robot consciousness

shuttIt said:
But if I'm asleep, surely it would be acceptable usage of the word to say that I am unconscious.
In non-REM sleep, yes. In REM sleep I'd say it was just an altered state of the same conscious processes you exhibit when you're awake.

We agree that this can't be done.
Maybe we do. Does John Edward? He claims to talk to disembodied conscious entities all the time. If we could actually locate such an entity, it would disprove the claim that only the brain can generate consciousness.

Nowhere is it written that everything has to be accessible to scientific inquiry. It seems plausible to me that consciousness (in the specific sense I mean it), what caused the big bang and the like are unknowable. If we are going to impose the constraint that they ARE knowable and then reason from there, we should at least admit that this is a pragmatic assumption and could be wrong.
Of course it could be wrong, but that's the only approach we have. We just keep hammering away at neuroscience until we feel we understand consciousness or reach a dead end.

However, I know for sure that there are some people who will say that we only ever discover the neural correlates of consciousness, no matter how much progress we make. It's those people who have an unfalsifiable hypothesis. (Not saying you're one of those people.)

At the risk of rambling, it just seems to me that the fact that I am not a reasoning meat automaton, and do in fact have an inner 'I', is not something that one would have predicted from anything we have learned about physics, chemistry or biology.
Sorry, now you're making an unwarranted assumption. I think you're just a reasoning meat automaton. Your inner "I" is a clever illusion pieced together from a hundred different processes because it was evolutionarily advantageous to do so.

~~ Paul
 
We agree that the mechanism bears no resemblance. I see no reason why both mechanisms cannot produce consciousness, since they are computationally equivalent (modulo any real-time-sensitive processes).

Computation is, of course, an abstract description of what the brain is doing physically.

If we create a physical circuit-brain which can also be described by the same kind of abstractions -- in other words, which is also doing what the brain is doing -- then maybe it can be conscious.

But when we get to the situation of a person with a pencil at a table who is thinking through these abstractions in his head, and using the pencil to help him along, then we're in different territory altogether.

(This is distinct from roger's TM example above)

In that case, there is no substantiation of what the abstractions symbolize.
 
In short, you can't move into a plat. You can't haul a ton of freight across a river by driving over the blueprint of a bridge.
 
Piggy said:
Consciousness arises from the physical activity of the brain.
Ah well, if you are going to define consciousness as going on in a brain, that does eliminate any other form of consciousness.

Why would you do that?

~~ Paul
 
Ah well, if you are going to define consciousness as going on in a brain, that does eliminate any other form of consciousness.

Why would you do that?

That's not an exclusionary statement.

It's true that consciousness arises from the physical activity of the human brain, and almost certainly other animal brains, as well.

That doesn't mean it can't arise from other types of "brains".

But if anyone is going to suggest some radically different type of consciousness, then they are going to have to explain how it is created and maintained.
 
Piggy said:
roger, thank you for post 267. An excellent post. I'll have to get to it tonight.
Agreed.

I'll say up front that the TM you're describing is indeed qualitatively different from the pencil-pushing human.
I don't see how. One of the first large applications I wrote was an industrial-strength Turing machine simulator for use by computer science courses. Arbitrary-sized tape, save and restore, assembler for TM programs, the whole deal. I wrote it in PL/I back in the days of batch runs, where I was lucky to get two runs per day. I spent a lot of time hand-simulating the program to uncover bugs without wasting runs. My hand simulation appeared in every way to be equivalent to running the program.
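
(For anyone who hasn't seen one, here's a minimal sketch of such a simulator in Python -- the transition-table format and names are invented for this example, and it is of course nothing like the PL/I original:)

```python
# A minimal one-tape Turing machine simulator, sketched for illustration.
# The transition format here is invented for the example.

def run_tm(transitions, tape, state="q0", blank="_", max_steps=100_000):
    """transitions maps (state, symbol) -> (new_state, write, move),
    where move is -1 (left), 0 (stay), or +1 (right)."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            span = range(min(cells), max(cells) + 1)
            return "".join(cells.get(i, blank) for i in span)
        state, write, move = transitions[(state, cells.get(head, blank))]
        cells[head] = write
        head += move
    raise RuntimeError("step limit exceeded")

# Example program: flip every bit, halt at the first blank.
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "1011"))   # -> "0100_"
```

Hand-simulating a program like that is exactly the same state-symbol-lookup loop, just done with a pencil.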

~~ Paul
 
Piggy said:
In short, you can't move into a plat. You can't haul a ton of freight across a river by driving over the blueprint of a bridge.
But, as you agreed before, consciousness is a process and not a thing. There is no physical freight to move.

~~ Paul
 
If you have time for an eccentric view on consciousness, try Sir Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Penrose is a brilliant mathematician who tried to tackle consciousness from that starting point. A bit like a plumber doing his best to explain a computer.
I'm going to do something I'm not proud of, and slag a book I haven't read. I read the reviews of this book when it came out (such as in the NYT Review of Books, which did several pages on it), and it just didn't thrill me. Admittedly, it touches on what we are talking about here, as he proceeds from the assumption that thought is nonalgorithmic, and thus not implementable by a UTM. Problem is, he does that without a shred of evidence, and goes on to speculate on a QM brain implementation, again without any evidence.

It'd be astonishingly interesting if it turned out the brain was nonalgorithmic, but I can't bring myself to read pure speculation, especially in the face of so much evidence that neurons and the networks they form are computable.

I do like Pinker (I went to see him speak recently, btw), and I'll definitely check out Damasio, whom I've never read.
 
roger said:
Problem is, he does that without a shred of evidence, and goes on to speculate on a QM brain implementation, again without any evidence.
Don't miss the Tegmark/Penrose/Hameroff debate:

http://space.mit.edu/home/tegmark/brain.html

It'd be astonishingly interesting if it turned out the brain was nonalgorithmic, ...
Just an observation to no one in particular: Algorithmic does not mean constrained by logic. The brain may construe a false statement as true. [Marvin Minsky]

~~ Paul
 
Roger, slag away, because I agree with your review of Penrose's The Emperor's New Mind. I called it eccentric, but could have just said weird and unfounded. Penrose may have had little choice other than to deal with consciousness from his own discipline; the plumber tells you how a computer works. Things did get better with his follow-up, Shadows of the Mind: A Search for the Missing Science of Consciousness. I got halfway through and just never picked it up again. It sits with about 20 others that are not finished.

I had the opportunity to meet Pinker in 2000 and really like the man as a person. I think he is going to remain a primary figure in this field (cognitive neuroscience) for a long time, especially considering that he is young.
 
roger said:
I'm going to do something I'm not proud of, and slag a book I haven't read. I read the reviews of this book when it came out (such as in the NYT Review of Books, which did several pages on it), and it just didn't thrill me. Admittedly, it touches on what we are talking about here, as he proceeds from the assumption that thought is nonalgorithmic, and thus not implementable by a UTM. Problem is, he does that without a shred of evidence, and goes on to speculate on a QM brain implementation, again without any evidence.

It'd be astonishingly interesting if it turned out the brain was nonalgorithmic, but I can't bring myself to read pure speculation, especially in the face of so much evidence that neurons and the networks they form are computable.

I do like Pinker (I went to see him speak recently, btw), and I'll definitely check out Damasio, whom I've never read.

You might also check out Gazzaniga's stuff.
 
Piggy, you completely misunderstand. When the others are talking about a pencil and paper brain, they are not saying the pencil and paper are the brain. They are saying that the pencil and paper are the engine that manifests the constructs of the universe in which the brain resides just as we think of reality as the engine that manifests matter and energy of our universe. And just as we have no way to see what our reality is beyond deducing the rules by which it operates, a brain constructed in the universe created by pencil and paper would have no way to see the pencil and paper. It would think and feel exactly as we do if the rules of its universe were the same as ours. In fact, apart from the absurdity of it, there is no experiment that we could perform to prove that we are not in the pencil and paper universe.
 
roger:

Having read post 267, I can now see where the gaps are in our communication on this -- or some of them, at least.

If I had known you were making such grand assumptions about the computational theory of the mind (CTM), I would not have said that I agree with you.

I certainly agree that it is extremely useful to model the mind, and neurons, in that way, and that tremendous strides are being made with that model in what everyone must admit are our early explorations of brain function. But in no way has a broad-based CTM been proven. Not even close.

I would wager -- in fact, I would wager quite a bit, at very high odds -- that although it is a "good enough" model for current investigations, it will turn out to have significantly limited explanatory power down the road.

My field, language, is one area in particular where CTM has not yet provided as robust an explanatory framework as we might hope. (Here's a sample critique from 2006, for example.)

You might be interested, btw, in Raymond Tallis's "Why the Mind is Not a Computer". It's rather thin, both in scope and in hard information on the brain, but it's an interesting examination (a la Dennett and Pinker -- though he would disagree with Pinker certainly on the topic of CTM) of how our own brains may have fallen victim to the associative nature of language in carrying over spurious assumptions when describing the brain in computational terms.

You said I'd get the Nobel Prize if I could prove that something in the brain is not computational, and certainly I'd get some kind of prize if I could do that. But not because I would be refuting anything that has supposedly been established. Rather, it would be because I settled an open question.

You said that my analogy with the daisies was "inapt because there is no programming controlling the swaying". What you forget is that there is no "programming" in the brain, either. Like the daisies, it is a purely physical, specifically biological, system interacting with the material world. But it is one which we know generates consciousness, whereas daisies do not.

And recent studies into biological systems and how they behave and evolve give us reason to doubt that purely computational systems could evolve in biological specimens.

You compare neurons to transistors, but that comparison doesn't quite fit.

Biological systems that are very rigid, like transistors, are very bad at absorbing shocks. They are fragile. Biological systems tend to evolve with wiggle-room. They are fuzzy. Like the heart. It can take a good bit of knocking around before it goes into fibrillation.

In fact, it was recently discovered that a highly regular heartbeat is bad news, because highly regular heartbeats are prone to fibrillation. Some random variation in heartbeat pattern is a good thing -- it means your heart can absorb shocks and return to its natural operational state. If it gets too rigid, it can too easily get knocked into the alternate contractive pattern that will kill you.
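
To put a rough number on that wiggle-room: beat-to-beat variability is usually quantified with measures like RMSSD. A quick sketch -- RMSSD is a standard measure, but the interval values below are invented for illustration:

```python
# RMSSD: root mean square of successive differences between
# beat-to-beat (RR) intervals. Higher = more "wiggle-room".
# The interval values are invented for the example.
import math

def rmssd(rr_intervals_ms):
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rigid  = [800, 800, 801, 800, 799, 800]   # metronome-like heart
supple = [790, 815, 802, 780, 825, 798]   # normal variation
print(rmssd(rigid), rmssd(supple))        # ~0.9 vs. ~28.4
```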

And although it is very useful to model neurons computationally, transistors they are not.

As you probably know, a simple model of a neuron consists of a synapse where the neuron picks up neurotransmitters (NTs) from adjoining neurons. When a sufficient number of NT molecules bombard the neuron, it reaches its threshold and fires, sending a signal down its length and releasing its own NTs into the next synapse. It then re-collects the NT molecules.

We can model this set-up computationally, even writing a simple program with values for n (the number of NT molecules to meet the threshold), an increment and decrement to bring the value of f (fire) from 0 to 1 and back down to 0, etc.
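
Something like this toy sketch, say (the n and f follow the paragraph above; the class name, decay rate and numbers are invented for illustration):

```python
# Toy version of the idealized threshold neuron described above.
# This is the post's idealization, not a biophysical model.

class ToyNeuron:
    def __init__(self, n=10):
        self.n = n          # threshold: NT molecules needed to fire
        self.pool = 0       # NT molecules collected at the synapse
        self.f = 0.0        # firing value: 0 = resting, 1 = spike

    def receive(self, molecules):
        """Absorb NT molecules from adjoining neurons."""
        self.pool += molecules

    def step(self):
        """One time step: fire if over threshold, else decay f toward 0."""
        if self.pool >= self.n:
            self.f = 1.0            # fire: signal travels down the length
            self.pool -= self.n     # release our own NTs, then re-collect
            return self.n
        self.f = max(0.0, self.f - 0.5)  # decrement f back down to 0
        return 0

neuron = ToyNeuron(n=10)
neuron.receive(4); print(neuron.step())   # 0  (below threshold)
neuron.receive(7); print(neuron.step())   # 10 (fires: 4 + 7 >= 10)
```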

And that's extremely useful.

But it's an idealization.

The biological reality is messier, more fluid, more open, with all sorts of other variables around it, and there's reason to believe that it has to be that way in a real-world evolved biological system.

So we cannot be certain, and we have good reason to doubt, that neurons actually are purely computational components, even though it is useful to model them that way at this stage of our investigation of the brain.

And as we scale up to less granular levels of organization, this same kind of fuzziness persists. In its real-time operations, the brain deals with all sorts of competing associative impulses, and very often makes mistakes by accepting the incorrect one (although even here computational models have proven useful, by describing the accepted association in terms of the number of "hits" -- in other words, the more numerous the associations, the more likely it is that the brain will choose that option, even if those associations have nothing to do with the task at hand).
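
A toy version of that "hits" idea (the candidates and counts are invented for the example):

```python
# Sketch of the "hits" model above: among competing associations,
# pick the option with the most associative links, relevant or not.

def pick_by_hits(candidates):
    """candidates: {option: number_of_associative_hits}."""
    return max(candidates, key=candidates.get)

# "What do cows drink?" -- 'milk' has more associations with 'cow'
# and 'drink' than the correct answer does, so the model errs:
print(pick_by_hits({"milk": 7, "water": 3}))   # -> milk
```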

This is why I have serious doubts that a cog brain would work like a neural brain. Cogs are quite rigid, and not very handy with the kind of threshold-based workings that we see in the brain. Maybe there's a way to reproduce this with cogs, though, I don't know. But I'd have to see it to accept that it's possible. Maybe, but maybe not.

Can all the workings of the brain be performed by a TM? Right now, there's no reason to accept the assertion, and some very compelling evidence to make us doubt that it will turn out to be the case.

So, after all that (and ignoring the pencil-brain thing, which has turned out to be a red herring, and now I see why) let's look again at the speed question.

If we had a robot brain which, by whatever method, worked like a human brain -- because we have no other model to use -- what would happen to its consciousness if we slowed down the rate of data-transfer between its basic components (the equivalent of neurons)?

First, we cannot assume that this brain is some sort of TM. That would be jumping the gun.

Instead, we must assume it works like a human brain, which may be some sort of TM equivalent, but maybe (I'd say most probably) not.

Well, obviously if we slow down the rate to zero, there's no consciousness. (That's why I kept bringing that up -- not because I thought you were arguing otherwise, but as one end of a continuum.)

Somewhere between natural brain-speed and 0, then, there's a point where consciousness is not sustainable. Is it at 0?

No, it can't be. It must be higher than 0. Why? Because from cases like Marvin's, and more recent research demonstrating that we act on stimuli before we are consciously aware of them, we see that consciousness is a specialized downstream process involving the coordination of highly processed information. Because the phenomenon of conscious awareness requires coordination of coherent aggregate data, and because neural impulses are ephemeral, there must be a point higher than 0 at which coherence is insufficient to maintain conscious awareness.

Right now, no one knows what it is, but because we know that neurons fire very quickly, it's safe to assume that a rate of 1 impulse per second would be too slow.
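
Purely as a back-of-envelope illustration (both figures below are assumptions, not measurements): if conscious awareness needs something like a 300 ms window of coherent activity at normal firing rates, then slowing the components down stretches that window proportionally.

```python
# Back-of-envelope only. Assumed figures: cortical firing on the
# order of 100 Hz, and a ~300 ms coherence window at normal speed.
normal_rate_hz = 100
slow_rate_hz = 1                       # the hypothetical slowed brain
window_ms = 300

slowdown = normal_rate_hz / slow_rate_hz
print(window_ms * slowdown / 1000)     # 30.0 -- each "moment" stretches to ~30 s
```

On those (debatable) numbers, ephemeral impulses would have to hold coherence over half a minute per "moment".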

Can we build a robot brain that would accept that rate?

I doubt it, because you'd have to accept a kind of flickering consciousness, like a movie run frame by frame.

But that wouldn't work, either, for reasons that another poster mentioned above.

Consciousness is not a point-in-time phenomenon. It's smeared out over time. There's a kind of Heisenberg effect to it, with associations being continually wrangled and forced into service. It's not like a camera lens, opening to the world and letting the light in.

So I can't see an impulse-per-second rate being high enough, even though no one can say, at this moment, what the floor would be.
 
Piggy, you completely misunderstand. When the others are talking about a pencil and paper brain, they are not saying the pencil and paper are the brain. They are saying that the pencil and paper are the engine that manifests the constructs of the universe in which the brain resides just as we think of reality as the engine that manifests matter and energy of our universe. And just as we have no way to see what our reality is beyond deducing the rules by which it operates, a brain constructed in the universe created by pencil and paper would have no way to see the pencil and paper. It would think and feel exactly as we do if the rules of its universe were the same as ours. In fact, apart from the absurdity of it, there is no experiment that we could perform to prove that we are not in the pencil and paper universe.

No, I understand that.
 
An experiment on linguistic perception related to the role of data coherence in conscious experience.

It seems to be the convergence of these measures in a late time window (after 300 ms), rather than the mere presence of any single one of them, which best characterizes conscious trials. "The present work suggests that, rather than hoping for a putative unique marker – the neural correlate of consciousness – a more mature view of conscious processing should consider that it relates to a brain-scale distributed pattern of coherent brain activation," explained neuroscientist Lionel Naccache, one of the authors of the paper.

The late ignition of a state of long distance coherence demonstrated here during conscious access is in line with the Global Workspace Theory, proposed by Stanislas Dehaene, Jean-Pierre Changeux, and Lionel Naccache.
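
For the curious, "long distance coherence" between electrode pairs is often quantified with something like a phase-locking value. The paper's actual analysis is more involved; this is a generic sketch on synthetic signals, with every parameter invented:

```python
# Generic phase-locking value (PLV), the sort of measure used to
# quantify coherence between signals. Signals here are synthetic.
import numpy as np
from scipy.signal import hilbert

fs = 1000                                   # samples per second
t = np.arange(0, 1.0, 1 / fs)               # one second of "EEG"
rng = np.random.default_rng(0)
a = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * rng.standard_normal(t.size)

def plv(x, y):
    """PLV: 1 = perfectly phase-locked, near 0 = unrelated."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.exp(1j * dphi).mean())

late = slice(300, None)                     # the post-300 ms window
print(plv(a[late], b[late]))                # high for these locked signals
```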
 
Here are a couple of articles that explore some very advanced uses of computational models with great success.

If more advances are made in this direction, the CTM may indeed win the day.

Neural Networks Help Unravel Complexity Of Self-awareness

Two Brains, One Thought: Wiring Diagrams Of A Neuronal Network Based On Its Dynamics

But if a version of CTM proves accurate, consciousness is still a very high-level function, and neural impulses (the building blocks of all high-level functions) are still ephemeral. Therefore, I don't see how we can get around the conclusion that there must be a floor for impulse speed below which consciousness is unsustainable.
 
Regarding "flickering" consciousness:

We actually have real-world examples of flickering consciousness.

When the brain is tired, it will take micronaps.

You might have experienced these while driving. You jerk awake and you're a few yards down the road. Probably, you get a huge rush of adrenaline.

It can happen at your desk, or just about anywhere.

From your POV, you're not aware of the gaps. Your awareness seems continuous; you just have these lost moments of time.

So obviously, it's possible for awareness to "flicker" to a certain extent.

The question for this thread would be how short the conscious spans can be and how long the gaps can be.

But that said, the brain is still running at full speed during all of this, so micronaps aren't necessarily a good analogue for a slowed-down brain.
 
