
Robot consciousness



If I remember correctly, the scooch-a-moot people used one certification exclusively for their scooch-a-moot stuff, and that was rabbit's weakness.

But another clue in support of my theory is the number of people helping the A.I., which suggests to me that the A.I. was actually just the global consciousness of all those people to begin with. And of course the higher an individual was in the rankings, the more contribution they could make.

At first I wanted to think that rabbit was in fact a separate A.I. but the more I read the more it seemed like he was just the figurehead of a huge human network.

Damn. I will have to read that library competition thing again. It was soo boring ;( ;( ;(
 
Well in the end the Library scene is one of those weird sci-fi group trance/orgasm/zen experiences, and that was a big clue to me. After all, you could kind of describe our own consciousness in those terms from the standpoint of an individual neuron.
 
I'll try again: how do you decide, in your own mind and among your own actions, which to label as clearly conscious and which as clearly not?

I'll answer again:

Piggy said:
Well, pretty much my best guess at what has a similar brain.

Dogs, pigs, dolphins, elephants, cats, horses, gorillas, chimps, I think all these must be conscious. Seems to me their brains will be similar enough to ours, and will need to do tasks similar enough to what ours have to do, that they're bound to be conscious.

Insects, I don't think so. Consciousness requires a lot of resources, and I don't see them having the necessary structures to make it happen.

Where the line is... damned if I know.
 
I am going to just ignore the rest of that mishmash of contradictions you call a response and focus on this last statement of yours.

Because I think it illustrates how fundamental your misunderstanding of this issue really is. Why should a system of buckets and pulleys make a cell divide? Really, why? What makes you say something like this?

What makes you think that the failure of buckets and pulleys to make a cell divide -- because obviously they cannot -- means anything at all in the context of this discussion?

Quite simple.

A system of buckets and pulleys cannot make a cell divide because it doesn't possess the means.

Similarly, a system of buckets and pulleys cannot generate consciousness because it doesn't possess the means.

If you're proposing that consciousness merely arises as a kind of side effect of the non-conscious activity of the brain, and/or that it does not require any specialized biophysical activity -- the way all other physical activities do, such as blinking, sweating, or shivering -- that is, activity which is not computation (although it may be simulated on computers), then you are making an entirely unfounded assertion.

There is absolutely no reason to believe that this is true, and quite a few reasons to believe that it is not -- that the production of consciousness is not achieved by pure computation, but rather in the same way that other bodily functions are performed: by biophysical means.
 
This poses no problems whatsoever for a computer simulation of the brain. Remember: the whole system is slowed down, so the inputs that simulate visual stimuli are also slowed accordingly. Apparently, processing of the stimuli reaches the stage where they are identified, but they have not yet been sent to the awareness unit when new conflicting stimuli are received; these result in another image that is eventually handled by the awareness unit, relegating the first picture to what we call "subliminal".

There is no reason to believe that "awareness" is the only necessary part of a simulation of the brain's functions. We just focus on a distinction between "awareness" and the rest because "awareness" is the only part we can actually notice when we are conscious.

Two comments.

First, we're not discussing simulations, but rather a conscious machine.

Second, all the evidence indicates that "being aware" is a biophysical activity of the brain which is distinct from other types of brain activity.
 
I'll answer again:

I'm asking how you judge your own conscious mind and actions, not that/those of another human, animal, or robot. If you're not clear about yourself, you have no hope of judging it in others.

You answered "has a similar brain", which wasn't the question. I'm just asking your opinion about actions of your own brain.

So, how do you decide what in your own mind and among your own actions to label as clearly conscious or as clearly not?
 
We've been trying to give a nutshell version. You keep rejecting the nutshell version because you misunderstand it--which is (forgive me) understandable. The CM (or Computational Theory of Mind, CTM, if you feel like looking it up) is not in itself that hard to understand, but it relies on LOTS of work done before. And it's pretty clear you don't have that foundational knowledge--the Church-Turing Thesis, the concept of "computability" from mathematics, the notions of Multiple Realizability and functionalism from computer science and cognitive science. I doubt anyone here has time to give you a primer course on all of this. I keep hoping you'll get interested enough to look it up for yourself, but it seems you are stuck on your pet theory to the point you won't even entertain other possibilities.

If you said "you know, I've read the literature about CM, and I have a good grasp of what it says, and I'm just not convinced" then I would be thrilled to continue this debate. As it is, this isn't interesting any more.

There are competing theories out there. There are real criticisms of CM that are difficult to meet. But your criticism is not among them, because it essentially amounts to "Nuh-uh, that's wrong." You are merely contradicting the central conclusion of CM, but without finding a weakness in the logic or the premises upon which it rests. That's why so many of us are just stunned at the intellectual arrogance you exhibit when you argue against a theory you obviously do not understand and seem to have no interest in trying to understand.

Until you educate yourself about current AI research (bare neurology doesn't count), arguing with you is futile.

And until you educate yourself about consciousness, arguing with you appears to be futile.

You seem to have convinced yourself that my intention is to refute the computational model of consciousness. It is not.

But if someone says that they have a theory which demonstrates that the earth is flat like a pancake rather than round like an orange, and you say, "Oh, you mean it can be modeled like that?", and they say, "No, it's like that in real space", you're not inclined to go read up on that theory before pointing out the obvious arguments against the conclusions they are drawing.

Again, I have absolutely no interest in arguing against any computational model of consciousness.

But if the claim is being made here -- which it certainly seems to be -- that a conscious machine must be able to maintain consciousness at any operating speed because TMs can do calculations at any speed, and that it's possible (hypothetically) for an abbey of monks with quill pens to generate consciousness by executing code by hand, then what I'm interested in is pointing out how such claims do not comport with the available evidence.

I think I've pinpointed now why these claims don't make sense -- they appear (from what y'all are saying here) to be predicated on an unfounded and unsupported assumption that consciousness can be generated without a dedicated physical mechanism for that purpose, simply through the implementation of processes analogous (or identical) to the brain's non-conscious processing.

There appears to be no direct observational or experimental evidence to support this notion -- or I assume you would have provided it, and keeping in mind that a theory cannot be supported by that same theory -- and quite a bit of evidence contradicting it.

Your habit of continually referring me to your theory is merely begging the question.
 
You missed the one I listed earlier in this thread: The interactions of humans and computers on a global internet form a level of consciousness that is unrecognized by the individual components.

Odd definition of consciousness you have there.
 
This more than anything seals the deal that I won't be continuing the debate. Your arrogance is breathtaking.

My arrogance?

This from someone who claims to be able to make assertions about consciousness based on a hypothesis that has never actually produced consciousness, and who brushes off contradictions between that hypothesis and direct observation and experimentation by making reference to the hypothesis itself?

Amazing.
 
What are you trying to say? Has anybody claimed that logic could be implemented without a physical substrate? Pencil and paper is the physical substrate in the OP.

I mentioned this point in my post.

The claim being made -- and I believe I'm reading it correctly, since it's been repeated several times now -- is that when non-conscious processing is being performed on the brain's neuronal substrate, then either the generation of consciousness arises from that without any additional biophysical action, or it is produced by an identical process, despite being qualitatively distinct.

Neither version of the claim is supported by any observational or experimental evidence whatsoever -- if it were, then some would have been produced -- and there is evidence which leads to the conclusion that this is not true.

Now don't get me wrong. I have no doubt that a conscious machine can be built. The brain is just a biological machine, after all.

But if the claim is being made -- which it appears to be -- that the real-world, real-time phenomenon of conscious awareness is caused by computation, and that the only physical substrate and configuration that is required is the same type which is required for non-conscious processes in the brain, then I must point out that such a claim is not supported by any evidence (hypotheses cannot be their own evidence) and is put into serious doubt by actual observation of the only known object that produces consciousness.

As for playing the DVD, there is input, there is output and there is logic in between. The input usually comes from a laser bouncing off the spinning disk, the output is usually displayed on a monitor screen and the logic is usually provided by a fast computer algorithm or custom logic chips.

But none of that electronic kit is necessary. The input could be provided by reading the pits on the disk with a microscope, the output could be sheets of paper colored by crayon and the logic provided by a room of monks using pencils and paper.

Your DVD analogy proves the OP's contention that the logic can be slowed down to any speed.
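The speed- and substrate-independence point being made here can be sketched in code. This is only a toy illustration under assumed names (nothing like a real DVD codec): the same pure decode logic, realized once as a fast batch pass and once as a laborious step-at-a-time process, yields identical output.

```python
def decode_frame(bits):
    """Pure decode logic: map a tuple of bits to a single 'pixel' value."""
    return sum(b << i for i, b in enumerate(bits))

def electronic_playback(disk):
    """Fast substrate: decode every frame in one quick pass."""
    return [decode_frame(frame) for frame in disk]

def monk_playback(disk):
    """Slow substrate: the very same logic, one pencil stroke at a time."""
    frames = []
    for frame in disk:
        value = 0
        for i, b in enumerate(frame):  # each iteration = one monk's pencil stroke
            value += b << i
        frames.append(value)
    return frames

disk = [(1, 0, 1), (0, 1, 1), (1, 1, 1)]
assert electronic_playback(disk) == monk_playback(disk)  # identical output at any speed
```

The output of the logic is the same however slowly it is executed; whether that settles anything about consciousness is, of course, exactly what the thread is arguing about.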

I've already conceded that the logic can be performed at any speed.

No one is arguing that it can't.

The question is, will the movie play?

A: No.

We're not talking about a computer that simulates the activity of the brain. We're talking about a conscious machine, one that actually performs the actions necessary to be conscious.

The difference between me and those who disagree with me appears to come down, at the end of the day, to a simple question: is consciousness generated by the same types of processes which perform the non-conscious actions of the brain (or perhaps as a side effect of those processes), or does it require an additional biophysical activity that is not the kind of thing a TM does -- that is, the kind of biophysical activity the body uses to do things like blinking, sweating, and shivering, and everything else the body does?

Their hypothesis appears to be based on pure assumption and to be unsupported by any observational or experimental evidence whatsoever.
 
I'm asking how you judge your own conscious mind and actions, not that/those of another human, animal, or robot. If you're not clear about yourself, you have no hope of judging it in others.

You answered "has a similar brain", which wasn't the question. I'm just asking your opinion about actions of your own brain.

So, how do you decide what in your own mind and among your own actions to label as clearly conscious or as clearly not?

Oh, sorry I misunderstood.

Pretty simple, really. When I become aware of it, it's conscious.

I'll give you an example.

Last night I worked very late, came home, and went to bed.

As I lay in bed, it occurred to me that I had left an item on the spreadsheet that should not have been there. I got up and emailed 2 people to alert them to that fact because I knew they would see the spreadsheet before I got in this morning.

Now, obviously, I didn't consciously choose to realize that I'd made a mistake half an hour ago.

Apparently, my brain was busy doing what it does, sorting out the events of the day and building up its stores of patterns and memories, all non-consciously.

When it got around to that bit, enough strong associations were triggered that it met the threshold for conscious attention, and the whole pre-processed ball of wax ("I left an item in the spreadsheet that should have been deleted") was served up to the devices in my brain that control conscious experience.

I then became aware of (remembered) what I'd done earlier.

That's really all there is to it. If I know anything about it, then it's been fed into conscious processing by the non-conscious processors.
 
Oh, sorry I misunderstood.

Pretty simple, really. When I become aware of it, it's conscious.

...
I then became aware of (remembered) what I'd done earlier.

That's really all there is to it. If I know anything about it, then it's been fed into conscious processing by the non-conscious processors.
So if you can remember a piece of information, you label it as conscious. How does that differ from what I've been claiming?

And why would a robot remembering something it had done earlier not be a clear indication of its consciousness?
 
Odd definition of consciousness you have there.

Your view of consciousness requiring a physical action to create the spark reminded me of something Douglas Adams said: "... for all other life forms out there, the secret is to bang the rocks together".


When you stop dodging the question of what consciousness is we may be able to make progress in this thread.
 
Your view of consciousness requiring a physical action to create the spark reminded me of something Douglas Adams said: "... for all other life forms out there, the secret is to bang the rocks together".


When you stop dodging the question of what consciousness is we may be able to make progress in this thread.

You must be kidding me.

Has it escaped your notice that I'm the only one so far to have cited any actual studies on consciousness?

Unlike those who call me arrogant, I don't rely on an as-yet unproductive hypothesis and tell folks to go read up on it. I've actually cited studies about the organ which produces consciousness and discussed those studies.

The view of consciousness asserted by the other side appears to be based entirely on the bare assumption that the same physical processes which are responsible for non-conscious activities of the brain are also responsible for the phenomenon of conscious experience. If there were any evidence to support such an assumption, it should have been provided by now.

But so far all we've seen are references to the hypothesis itself.

At this point, to any skeptics lurking on this thread, enough red flags should be flying to make you think you were in a May Day parade.

To begin (but not end) with our own experience of consciousness, we can observe that it is distinct from non-conscious processes. I've already mentioned the cocktail party effect, for example.

Also, we should note that our conscious awareness has a locatable physical instantiation, although not necessarily a precise one. Our feet seem to be below the area where conscious awareness is taking place. For that matter, so does the jaw.

The forward boundary is somewhere around the eyes.

The upper boundary is somewhere around the top of the skull. When we put a hand on top of the head, the boundary seems to retreat somewhat, giving the impression that it's below the hand.

This in itself is an indication that it's likely that we're looking at a biophysical activity of the physical organ of the brain.

And in fact, since it is a fairly simple matter to distinguish between the phenomenon of conscious experience, on the one hand, and the bulk of brain activity of which we're not consciously aware, on the other -- such as regulating our breathing, for example -- it makes very little sense to assume that these different functions must rely on identical biological processes.

When we look into the experimental evidence, we find further support.

The subliminal studies which I have cited (where are the counter-evidentiary studies?) clearly and unequivocally draw a line between non-conscious processing and conscious experience, and irrefutably demonstrate that the brain processes, stores, and uses information that is not made available to those brain functions which generate conscious awareness.

The split-brain studies and blindsight studies and decision studies go further. (All of which have been linked on this thread, by the way.)

In the blindsight experiments, we see that the brain can (and does) process information enabling the body to navigate an obstacle course before feeding any of that information to the structures responsible for generating conscious awareness.

In the split brain studies, subjects with a severed corpus callosum can feel an object, out of sight, with one hand and have no awareness of what the object is. (It's not that they know but can't say -- which is typical of certain aphasias -- it's that they are not aware of what the object is.) Yet they can point to a picture of that object among pictures of other objects. When asked why they pointed to that picture, they have no idea.

Obviously, the brain is doing one heckuva lot of work non-consciously. Conscious awareness comes after. It's an add-on. It's a distinct function. The decision studies (which I've also cited) bear this out.

To cap it all off, we now have a study done on subjects with deep brain probes, which demonstrates clearly that the brain initiates biophysical processes when consciously aware of objects in the environment that it does not initiate when it is not consciously aware of those objects. (I have cited that, as well.)

And these are not neuronal processes. The researchers involved in that study confirm a trend that has been the trajectory in biology for a while now -- the abandonment of a neuronal model of consciousness.

And yet the computationalists continue with this talk of consciousness being produced by the execution of code.

It's time for them to pony up with some experimental verification of their outlandish claims, which seem to violate the most fundamental laws of physics.

Conscious awareness is something the body does. As such, it requires physical action to initiate and sustain. The current experimental evidence indicates that it is not initiated and sustained by the same processes which support non-conscious processes in the brain.

And if anyone wants to refute that, they need to cite some evidence, not yammer on about how naysayers need to understand their hypothesis.
 
So if you can remember a piece of information, you label it as conscious. How does that differ from what I've been claiming?

And why would a robot remembering something it had done earlier not be a clear indication of its consciousness?

The brain can recall and use information for decision making without that information ever being made available to conscious experience. The subliminal studies I've cited confirm that.

So recall is not the litmus test. Awareness is.

And yes, if we have a robot that can be aware of its surroundings, and aware of certain memories, of course it's conscious.

What's your point?
 
The brain can recall and use information for decision making without that information ever being made available to conscious experience. The subliminal studies I've cited confirm that.

So recall is not the litmus test. Awareness is.

And yes, if we have a robot that can be aware of its surroundings, and aware of certain memories, of course it's conscious.

What's your point?
Well that's progress. Now we have to look at how others come to know about this conscious experience. You reported it by writing here, and presumably you'd allow that a robot reporting here would be an equal indication of its conscious experience. If you couldn't report conscious experience unless you were conscious, then I wouldn't expect you to say that a robot could either.

My point is that this report of consciousness can be used by others to make their determination. And if it's a program running in a robot that produces the report, then that program is single-steppable, meaning that the production of consciousness must also be single-steppable.
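The single-stepping claim can be illustrated with a toy sketch (all names here are hypothetical, not a real robot program): a deterministic program produces the same report whether it is run to completion in one go or advanced one step at a time, with arbitrary pauses between steps.

```python
def report_program(memory):
    """One step per memory item: build up a 'report' of recalled experience."""
    report = []
    for item in memory:
        report.append(f"I recall: {item}")
        yield list(report)  # suspend after each step, like a debugger breakpoint

memory = ["left item on spreadsheet", "emailed two people"]

# Continuous run: drain every step at once.
continuous = None
for continuous in report_program(memory):
    pass

# Single-stepped run: advance manually; any amount of time may pass between steps.
stepper = report_program(memory)
single_stepped = None
for _ in range(len(memory)):
    single_stepped = next(stepper)  # an hour, or a century, could pass here

assert continuous == single_stepped  # the report is identical either way
```

The program's output is invariant under single-stepping; whether the *production of consciousness* inherits that invariance is precisely the point in dispute.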
 
Well that's progress. Now we have to look at how others come to know about this conscious experience. You reported it by writing here, and presumably you'd allow that a robot reporting here would be an equal indication of its conscious experience. If you couldn't report conscious experience unless you were conscious, then I wouldn't expect you to say that a robot could either.

My point is that this report of consciousness can be used by others to make their determination. And if it's a program running in a robot that produces the report, then that program is single-steppable, meaning that the production of consciousness must also be single-steppable.

Oh, good God.

Give me a break.

This rests on the assumption that we cannot program machines to make false reports.

Need I say more?
 
Well that's progress. Now we have to look at how others come to know about this conscious experience. You reported it by writing here, and presumably you'd allow that a robot reporting here would be an equal indication of its conscious experience. If you couldn't report conscious experience unless you were conscious, then I wouldn't expect you to say that a robot could either.

My point is that this report of consciousness can be used by others to make their determination. And if it's a program running in a robot that produces the report, then that program is single-steppable, meaning that the production of consciousness must also be single-steppable.

Look, you pony up with real research, or you don't.

You demonstrate that some evidence exists that a single-stepped program has produced consciousness, or you don't.

I'm getting goddam tired of the philosophical baloney.

Deal with reality, or admit you can't.
 
Oh, good God.

Give me a break.

This rests on the assumption that we cannot program machines to make false reports.

Need I say more?
What would you see as a false report in this case? Remember, you'd get the exact same report of it recalling its experience whether the robot was running the program continuously or single-stepping it.
 
