• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

Robot consciousness

A system of buckets and pulleys cannot make a cell divide because it doesn't possess the means.

Right. Unless there was a definition of cell division that involved buckets and pulleys. There isn't -- a "cell" is a very specific biological entity.

Similarly, a system of buckets and pulleys cannot generate consciousness because it doesn't possess the means.

Wait, what? Why not? Who says? What kind of reasoning did you use to reach this conclusion, eh?

Was it something like "People are conscious, people are not made of buckets and pulleys, therefore buckets and pulleys can't generate consciousness?"

Do you see the logical flaw in that reasoning?

Intelligent educated people define consciousness in terms of behavior. If a system behaves in a certain way, it is conscious. Certainly, if it thinks it is conscious, and can communicate with you, it is going to be conscious. The only reason to think otherwise is some a priori religious delusion.

So the question becomes, can a system of buckets and pulleys behave in such a way? Well, why not? Can you give me a single logically sound reason why a system of buckets and pulleys could not 1) think it was conscious and 2) communicate this to you?
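To make the "why not" concrete: NAND is functionally complete, so any physical system that can realize a NAND operation -- whether in silicon or, in principle, buckets and pulleys -- can be composed into every other logical function, and from there into arbitrary computation. A minimal sketch (in Python, purely as illustration; the gate names and the half-adder demo are mine):

```python
# Substrate-independence in miniature: every gate below is built from
# NAND alone. Any physical system implementing this one truth table --
# electronics, neurons, or buckets and pulleys -- can compose the rest.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    # Standard four-NAND construction of XOR.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a, b):
    # The first step toward arithmetic, from one primitive: (sum, carry).
    return xor_(a, b), and_(a, b)

# Truth table for the half adder, derived entirely from NAND.
table = {(a, b): half_adder(a, b) for a in (0, 1) for b in (0, 1)}
```

Nothing here cares what the NAND is made of; a bucket-and-pulley realization would compute the same table, just slowly.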

If you're proposing that consciousness merely arises as a kind of side effect of the non-conscious activity of the brain, and/or that it does not require any specialized biophysical activity -- the way all other physical activities do, such as blinking, sweating, or shivering -- which is not computation (although it may be simulated on computers), then you are making an entirely unfounded assertion.

No, I am proposing that nobody who actually knows what they are talking about defines consciousness in a way that restricts it to any specific system. The only requirement to being conscious is a certain type of behavior. Contrast this with the requirements for cell division which, in particular, involve biological cells.

There is absolutely no reason to believe that this is true, and quite a few reasons to believe that it is not, but rather that the production of consciousness is not achieved by pure computation, but rather in the same way that other bodily functions are performed, by biophysical means.

You just don't get it. There is no such thing as pure computation. The term doesn't mean what you think it means.
 
All right, I'm done.

But before I go I just want to point out, yet again, that the claims of those who disagree with me have been supported by precisely zero citations of observational or experimental evidence.

Zero.

None.

Nil.

Zilch.

The notion that consciousness is generated from non-conscious processing, or from identical processes despite being a qualitatively different phenomenon, is based 100% on assumption.

No evidence has been presented that it is true.

On the other hand, I have provided citations of evidence which indicate that the generation and maintenance of conscious experience is a physiological activity which, like all other physiological activities -- and indeed, all other real-world, real-time events -- requires some specific physical mechanism to support it.

The computationalists insist -- absent any supporting evidence -- that the only physical activity necessary to produce consciousness is the activity required to allow the computation to run. This is tantamount to claiming that the computation itself generates consciousness.

When you get down to it, we're talking nothing short of a violation of the most basic laws of physics.

And yet, they cite no evidence for this claim whatsoever, but refer only to their own hypothesis which claims that it is true.

And on the basis of their utterly unsupported hypothesis, they insist that we must accept absurd conclusions, such as consciousness being instantiated by people working out code on pen and paper.

Then they accuse me of being arrogant.

Well, so be it. None so blind, you know.

If no evidence in support of their hypothesis has been offered by now, I expect that none is to be had.

In response to the studies I've offered, so far it seems one person has bothered to even look at them, and his response was, well, he wasn't convinced, and yet he could not say why.

Enjoy the Kool-Aid, folks.

AMF.

This equates to saying "people who say that computers are the result of computation must be wrong because you can't find computation in a computer without there being a computer for the computation to occur in."

wtf?

That you seem to honestly be stuck on the fact that human brains are physical things, and you think computation is somehow not necessarily a physical thing, betrays just how little you actually know about this subject.

All your citations have nothing to do with anything, piggy, and even if they did they all support the computational model. Oh, didn't you know that? Every single reliable psychological finding in history supports the computational model.

So we aren't "ignoring" them, it just so happens that they aren't relevant like you think they are. You can throw all the psychological evidence in the world at us and it doesn't change things in the world of Alan Turing and Alonzo Church one bit.
 
First, we're not discussing simulations, but rather a conscious machine.
Same thing, but if it makes you happy, we could start using the term "virtual brains".

Second, all the evidence indicates that "being aware" is a biophysical activity of the brain which is distinct from other types of brain activity.
Yes, I used the concept of an "awareness unit" in the virtual brain in order to make it clear that it might be different. Just in what way it is different is of no importance, since we are talking pure theory, and we only need to know that the "awareness" is still a TM.
 
Also, we should note that our conscious awareness has a locatable physical instantiation, although not necessarily a precise one. Our feet seem to be below the area where conscious awareness is taking place. For that matter, so does the jaw.

The forward boundary is somewhere around the eyes.

The upper boundary is somewhere around the top of the skull. When we put a hand on top of the head, the boundary seems to retreat somewhat, giving the impression that it's below the hand.

This in itself is an indication that it's likely that we're looking at a biophysical activity of the physical organ of the brain.


Did anyone else get the sense that Piggy can actually feel the conscious boundary of his body?

If consciousness is an abstract field that surrounds the organ responsible for generating the field, how are we ever going to reduce this to pure calculation?
 
Did anyone else get the sense that Piggy can actually feel the conscious boundary of his body?

If consciousness is an abstract field that surrounds the organ responsible for generating the field, how are we ever going to reduce this to pure calculation?
I think that's exactly what he's saying.

When I stub my toe, I sure as hell know that my consciousness doesn't stop at my brain stem.
 
This is tantamount to claiming that the computation itself generates consciousness.

When you get down to it, we're talking nothing short of a violation of the most basic laws of physics.

So Piggy's as bad at physics as he is at cognitive science. I almost wish he hadn't left so I could ask him which laws of physics the consciousness model violates.


I guess when you can't slam the door and stomp away on the Internet, you resort to forum-censor-thwarting abbreviations for the Spanish-English bastardization of "Go with God, you people who fornicate with women who have borne children."

How rude.
 
I guess when you can't slam the door and stomp away on the Internet, you resort to forum-censor-thwarting abbreviations for the Spanish-English bastardization of "Go with God, you people who fornicate with women who have borne children."

How rude.

This is how threads with piggy often end, so don't sweat it.
 
This is how threads with piggy often end, so don't sweat it.

That's cool. I won't. :)

It's still weird how on a skeptical forum, you find someone who stubbornly refuses to read stuff because he's already made up his mind that it's BS. And that estimation apparently comes from the fact that it involves philosophy.
 
I'd like to continue the discussion of the limits of what consciousness is. Trying to extract a definition of consciousness from human consciousness is fraught with the difficulty of separating out the other aspects of the human mind such as intelligence and symbolic processing. Can we come up with a more basic understanding of what it means to say that an entity is conscious?

What are the lower limits of consciousness? Is self awareness a necessary component of consciousness or is simple awareness of some external stimuli sufficient? Would self awareness alone be sufficient?

Why would a smart thermostat -- one that tracks inside and outside temperatures and learns to predict the effects of outside changes, so it can maintain the internal temperature in an acceptable range based on feedback from manual adjustment inputs -- not qualify as conscious?

Can there be a level of consciousness above or simply different from our own such that it is not readily apparent to us but would fit under a broader definition?
 
I'd like to continue the discussion of the limits of what consciousness is. Trying to extract a definition of consciousness from human consciousness is fraught with the difficulty of separating out the other aspects of the human mind such as intelligence and symbolic processing. Can we come up with a more basic understanding of what it means to say that an entity is conscious?

What are the lower limits of consciousness? Is self awareness a necessary component of consciousness or is simple awareness of some external stimuli sufficient? Would self awareness alone be sufficient?

Why would a smart thermostat -- one that tracks inside and outside temperatures and learns to predict the effects of outside changes, so it can maintain the internal temperature in an acceptable range based on feedback from manual adjustment inputs -- not qualify as conscious?

Can there be a level of consciousness above or simply different from our own such that it is not readily apparent to us but would fit under a broader definition?

I would say start another thread. These questions are different enough from the OP that they are probably considered off-topic.

I'll definitely participate if you'll start it.
 
I'd like to continue the discussion of the limits of what consciousness is. Trying to extract a definition of consciousness from human consciousness is fraught with the difficulty of separating out the other aspects of the human mind such as intelligence and symbolic processing. Can we come up with a more basic understanding of what it means to say that an entity is conscious?

Absolutely. The problem is that most people have ulterior motives for being interested in this issue, even if they don't know it, and any definition they use is polluted accordingly.

For example, it would be quite acceptable to say that consciousness == human consciousness, as exhibited by a normal human (you and I). But this would disqualify many mentally handicapped individuals, as well as all animals by definition. So then where do you draw the line? I have asked this of many people -- great apes are surely conscious, but what about dogs? What about chipmunks? Birds? Fish?

And that is why knowledgeable people simply rely upon a purely behavioral definition. Of course there are varying definitions even in that sense, but at least people agree to disagree. For example, PixyMisa has a higher threshold than I do -- I happen to feel that when any computation takes place at all there is consciousness (and in that regard the very term "consciousness" is redundant in a formal sense as far as I am concerned), whereas Pixy puts the threshold at self referential behavior. And many put the limit still higher at fully self aware behavior.

But we all know that it is behavior in question and that each other's definitions are simply arbitrary thresholds so there is no time wasted bickering. I don't discuss with Pixy "what are the data structures that are necessary for consciousness," I discuss "what are the data structures necessary for a specific behavior, call it what you wish." And that kind of discussion is actually fruitful, mainly because it is objective and obvious when there is an error.

What are the lower limits of consciousness? Is self awareness a necessary component of consciousness or is simple awareness of some external stimuli sufficient? Would self awareness alone be sufficient?

... and here you see why it is better to speak in terms of behavior.

It is trivially obvious that self awareness is required to exhibit self aware behavior, that is really all we can say.

Another point is that an entity doesn't get to self awareness without also having non-self awareness. That is, you are either just "aware" in an indiscriminate fashion, or you are aware of both self and non-self. This is a very common error -- most theists make it, in fact, by presupposing that knowledge of self is somehow the most fundamental type of knowledge.

Why would a smart thermostat -- one that tracks inside and outside temperatures and learns to predict the effects of outside changes, so it can maintain the internal temperature in an acceptable range based on feedback from manual adjustment inputs -- not qualify as conscious?

That all depends on the definition, and as before, you see why it is pointless to even ask. I happen to think such a device is conscious. Pixy does as well. Many do not. But does that mean anything? Absolutely not.

What everyone agrees upon is that a thermostat does not behave like a human. Many people with an axe to grind, however, regurgitate this strawman that since people like me think a thermostat is conscious we also think it must have feelings and be able to love other thermostats and blah blah blah. wtf? Why would I think that?
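For the record, the device described in the question is behaviorally very simple. A hypothetical sketch of such a thermostat (the class name, parameters, and learning rule -- a crude running adjustment from manual corrections -- are all illustrative assumptions of mine, not any real product's design):

```python
# Illustrative sketch of the "smart thermostat" from the question:
# it predicts indoor drift from the outdoor temperature and folds
# manual corrections back into what it has learned.
class SmartThermostat:
    def __init__(self, setpoint=21.0, learning_rate=0.2):
        self.setpoint = setpoint          # target indoor temperature (C)
        self.learning_rate = learning_rate
        self.outdoor_coeff = 0.0          # learned effect of outdoor temp

    def predict_indoor_drift(self, outdoor_temp):
        # Predicted drift away from the setpoint due to outside weather.
        return self.outdoor_coeff * (outdoor_temp - self.setpoint)

    def control(self, indoor_temp, outdoor_temp):
        # Heat output compensates for measured error plus predicted drift.
        error = self.setpoint - indoor_temp
        return error - self.predict_indoor_drift(outdoor_temp)

    def manual_adjust(self, delta, outdoor_temp):
        # A manual adjustment is feedback: shift the setpoint and nudge
        # the learned outdoor coefficient in proportion.
        self.setpoint += delta
        self.outdoor_coeff += self.learning_rate * delta / max(
            abs(outdoor_temp - self.setpoint), 1.0)
```

Nothing in this loop mentions feelings or loving other thermostats; whether one calls it "conscious" is purely a question of where the definitional threshold is set.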

Can there be a level of consciousness above or simply different from our own such that it is not readily apparent to us but would fit under a broader definition?

Absolutely. But it is not as simple as looking at the pattern of creatures below us and forming a conclusion by inductive reasoning. You get the answer by asking another question -- "what kinds of things exist that we humans don't grok?"

And in case you aren't familiar with that word, grokking is merely knowing by intuition, the way you know that you are a creature in a 3D world, the way you know how to walk, the way you know simple arithmetic, etc.

At the very least, we do not possess the fullest level of self awareness possible -- not only can we not be aware of neural activity in our own brain (yet), but we wouldn't be aware of how that activity fits into the universe mathematically even if we were.

And that latter point can be considered independently as well -- we are creatures of 3 dimensions, and can only perceive the 4th as time. If there are others, they are well beyond us grokking them. And it is possible there is a way to grok the 4th dimension as a simple extension of the 3 we know and love rather than a fundamentally different "thing."

We are also notoriously bad at mathematics -- we can grok simple arithmetic, but beyond that we require multiple passes of recursion to understand concepts. The best mathematicians probably already have an additional level of awareness that people like you or I can't understand, and it lets them grasp mathematical concepts without having to slowly recurse on paper like everyone else does. Does it make their normal experience different? Probably not. But they are able to look at math and see a world we are blind to.

In my own experience, I know I have a very intuitive awareness of computer science that I did not have 10 years ago in high school. I can look at a video game I am working on and just know what is going on behind the scenes. I can see a behavior on screen and just grok which values of which words in memory (in an abstract sense, of course) are causing it to happen. And I am sure most professionals have a similar ability.

The most important idea to glean from all of this, in my opinion, is that additional awareness is learned. There is nothing inherent. Sure, some types of awareness might require certain biological hardware to be present, and that might be inherited, but everything else is learned. So I don't see why humans won't bootstrap themselves into higher levels of awareness as the technology gets better.
 
I'd like to continue the discussion of the limits of what consciousness is.
Me too.

Trying to extract a definition of consciousness from human consciousness is fraught with the difficulty of separating out the other aspects of the human mind such as intelligence and symbolic processing. Can we come up with a more basic understanding of what it means to say that an entity is conscious?
I have seen that many simply equate all of these.
Sure, the argument is that consciousness does require serious computing power and a handle on symbols for "me" and "world". But they don't seem equivalent to me.

What are the lower limits of consciousness? Is self awareness a necessary component of consciousness or is simple awareness of some external stimuli sufficient? Would self awareness alone be sufficient?
I could be on thin ice here...
Is self-awareness different from consciousness at all? I'm having a hard time seeing the difference. At best, I could say that consciousness is just a description of the current state (as in "she sleeps"), while self-awareness is intrinsic.

Based on this you could guess my view on that smart thermostat. It is not conscious at all.

....
After 10 minutes of thinking:
There is no way to avoid human and animal models.
I tried asking myself about babies. They are clearly not self-aware up to some age (what is the current best guess?), yet I can still tell whether a baby is conscious or not.
OK, so maybe we can take self-awareness out of consciousness.
 
We are also notoriously bad at mathematics -- we can grok simple arithmetic, but beyond that we require multiple passes of recursion to understand concepts. The best mathematicians probably already have an additional level of awareness that people like you or I can't understand, and it lets them grasp mathematical concepts without having to slowly recurse on paper like everyone else does. Does it make their normal experience different? Probably not. But they are able to look at math and see a world we are blind to.
Very interesting view. I like it.
So we possess self-grokness :):)

Logical games seem to me a better model for these considerations. I remember my beginnings in some games: the board with all those pieces just seemed so opaque. Then, after a lot of playing, the game becomes transparent. So I become aware of that world? Could work.
I would also add my experience with those games once I grok them. It did affect my normal experience: the normal experience of the outer world is very diminished. Almost non-consciousness...

The question I would ask is do we need recursive processes here? Is it that things that don't require recursion are simply trivial to learn?
 
Ok, so no new thread.

I'd like to continue the discussion of the limits of what consciousness is. Trying to extract a definition of consciousness from human consciousness is fraught with the difficulty of separating out the other aspects of the human mind such as intelligence and symbolic processing.

Ya, definitely. Candidate definitions of consciousness typically include one or more of the following:

Self awareness -- something has a model of itself (and possibly its environment) it uses to regulate its own behavior. Problem words here: "has", "model", "regulate", "behavior"

Sentience -- something is able to have subjective experiences (sometimes called qualia). There is a "what it's like" to be the thing. The thing "knows" what it's like to see red, and this knowing is something distinct from simply being able to identify a particular smear of longish wavelengths.

Candidate definitions may also include Intentionality -- consciousness has an object. Consciousness is "about" something. i.e.: "I am conscious of the taste of peanut butter in my mouth" or "I am aware of hunger", "I am thinking of a number between 1 and 10".

I need to cut this short for now, but I'll post more later (I hope).
 
The question I would ask is do we need recursive processes here? Is it that things that don't require recursion are simply trivial to learn?

We have to be careful to stipulate that by "recursion" we mean simply "steps in a process of inference, steps that seem somewhat recursive in nature," rather than recursion in the formal theoretical sense.

That is, to understand a mathematics theorem of even moderate complexity, I myself need to write things down and study each step of the process over and over, until eventually I see how to get from point A to point B.

So yes I would say things that don't require multiple steps of inference are simply trivial to learn, at least given the hardware configuration of the human brain. Some things that are trivial for us are probably not trivial at all for other creatures, and vice versa.

Furthermore note that an intelligent entity can bootstrap new experience using what it has already learned. A smart mathematician likely learns a new theory much faster than you or I since they have seen similar things before, etc.
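To make the terminological distinction above concrete, here is the difference in code (both examples are mine, purely illustrative): recursion in the formal sense is a definition in terms of itself, while the "multiple passes" sense used in this discussion is ordinary iteration over the same material until it clicks.

```python
# Formal recursion: the function is defined in terms of itself.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# The informal sense -- repeated passes over a proof until each step's
# prerequisites have been grasped -- is closer to plain iteration.
def study(steps, passes=3):
    understood = set()
    for _ in range(passes):               # re-read the proof several times
        for step in steps:
            # a step is grasped once all of its prerequisites are grasped
            if all(p in understood for p in step["needs"]):
                understood.add(step["name"])
    return understood

# A toy two-step proof: the theorem depends on the lemma.
proof = [
    {"name": "lemma", "needs": []},
    {"name": "theorem", "needs": ["lemma"]},
]
```

With more tangled dependencies, `study` needs more passes before everything lands, which is roughly the experience of working through a hard theorem on paper.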
 
And in case you aren't familiar with that word, grokking is merely knowing by intuition, the way you know that you are a creature in a 3D world, the way you know how to walk, the way you know simple arithmetic, etc.

<snip>

The most important idea to glean from all of this, in my opinion, is that additional awareness is learned. There is nothing inherent. Sure, some types of awareness might require certain biological hardware to be present, and that might be inherited, but everything else is learned. So I don't see why humans won't bootstrap themselves into higher levels of awareness as the technology gets better.

Maybe this awareness is related to something called "chunking" or (from a friend of mine) "Zenning it". It's when you are no longer explicitly thinking through every step, or every possibility of a process or problem. In some sense, you offload some of the processing into a specialized mental faculty.

For instance, master chess players apparently no longer think in terms of moves, but in terms of board configurations. A board will look "thicker" in some places, and "thinner" in others, maybe where lines of attack and defense converge. The overall image of the board is what these players are aware of, and not the move-per-move sequences. They will understand how a move or set of moves will change the configuration of the board.

In another case, martial artists gain a "muscle memory" for certain movements, and offload the split-second decision making to the "reptile brain" (maybe, not sure) where movements are more reflexive rather than deliberate.

Even driving, after you've done it a while, becomes "second nature". You are no longer having to think about coordinating your eyes, hands, and feet. You've "chunked" driving, allowing you to talk, eat, chew gum, listen to music, etc, while doing it.

Maybe these aren't new awarenesses that you learn, but ways of freeing up conscious processing time for more important, difficult, or novel things.
 
Just grokked something!

Chunking seems to occur when you consciously or subconsciously develop a dependable set of heuristics or algorithms you can use to process the information, such that the act of traversing these mental pathways is no longer novel to the mind, and doesn't set off the "I need to be aware of this" trigger. It also explains why it's just as difficult to unlearn things as it is to learn them. You have to make yourself recognize when you're traveling down a comfortable (if maybe incorrect) mental path.
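A loose software analogy for this (mine, not anyone's theory of mind): memoization. The first traversal of a pathway is slow and deliberate; once the result is dependable, later traversals skip the work entirely, and a cached answer comes back without re-checking -- which is also why unlearning takes explicit effort.

```python
import time
from functools import lru_cache

# The cache plays the role of a "chunked" mental pathway: novel input
# triggers slow, deliberate processing; familiar input does not.
@lru_cache(maxsize=None)
def traverse_pathway(problem):
    time.sleep(0.01)  # stand-in for slow, deliberate processing
    return f"solution to {problem}"

start = time.perf_counter()
traverse_pathway("parallel parking")   # slow: novel, "conscious"
first = time.perf_counter() - start

start = time.perf_counter()
traverse_pathway("parallel parking")   # fast: cached, "chunked"
second = time.perf_counter() - start
```

The analogy even covers the comfortable-but-wrong path: the cache will happily keep returning a stale answer until you explicitly invalidate it (`traverse_pathway.cache_clear()`).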
 
Yep, you are spot on.

Incidentally, the amount and complexity of such processes that we can "offload" into subconscious pathways sort of flies in the face of many traditional notions, not the least of which is free will.
 
I guess I should have noted that perhaps the greatest difficulty in defining general consciousness is that we all (presumably) have personal experience with human consciousness.


Though I can imagine subsets of consciousness, such as self-awareness with external stimuli cut off, or individuals who lose their self-awareness through brain injury but remain conscious of the outside world, are these really separate functions, or just a natural development of the conscious model that recognizes the self as something that is always there?


Does chunking actually remove processing from consciousness? An alternative view could be that once we stop questioning the process and simply accept the result, the process itself is executed in one pass in a subliminal moment. The process doesn't last long enough to form a memory of its execution.

On the other end, we have developed symbolic representations that allow us to offload intermediate results of conscious processing to external media such as symbols on paper. Though clearly consciousness exists without the external media, do the symbols remain part of the consciousness when written externally?
 
You guys still on about this? I just returned from two weeks at the beach, so my brain is disengaged.

I did read about half of Permutation City and I'm certainly enjoying it.

~~ Paul
 
