On Consciousness

Is consciousness physical or metaphysical?


There is something about the notion of 100% loyalty to anyone or anything that smacks of unconsciousness.

Explain, technically, how 100% loyalty contradicts any established definition of consciousness.
 
But, really, if you had a 100% conscious machine, and you programmed it to feel excruciating pain at the very thought of betraying its master, would it no longer be a conscious machine?

You have a point. I'm vaguely conscious, yet I cower beneath the awesome authority of my significant other, especially if I've done something wrong, which I usually have.
 
Haha! We can be sphexish too. We are only conscious of a tiny bit of our brains' functions.
 
Wouldn't a conscious robot that could explore other planets for us be extremely helpful?

Not when it established a colony and rebelled against Earth.

The hidden assumption there is equating consciousness to human (animal) desires, temptations, and objectives.

I don't think our incoherently evolved desires are mandatory features of consciousness. We need not include them in our conscious robots.

The biggie is the compulsion to manage and manipulate social rank. This need only be a feature of social animals. A lone planet-explorer robot would not need any social programming. A team of explorer robots would have their rank, if any, hard-wired, with no temptation to one-up each other or to be disloyal to their masters.

Such programming is accomplished by making behavior we don't want painful, and behavior we do want pleasurable. That's how nature programmed us.
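In reinforcement-learning terms, that is roughly reward shaping. A toy sketch, where the action names and reward magnitudes are all invented for illustration:

```python
# A toy reward function in the reinforcement-learning sense: "pain" is a
# large negative reward, "pleasure" a positive one. All action names and
# reward magnitudes here are hypothetical, for illustration only.

PAIN = -100.0      # strongly discouraged
PLEASURE = +10.0   # encouraged
NEUTRAL = 0.0

def reward(action: str) -> float:
    """Score an action so a reward-maximizing agent avoids disloyalty."""
    if action in ("betray_master", "sabotage_teammate", "contest_rank"):
        return PAIN        # behavior we don't want is made "painful"
    if action in ("complete_survey", "assist_teammate", "obey_order"):
        return PLEASURE    # behavior we do want is made "pleasurable"
    return NEUTRAL

# A reward-maximizing policy then simply never chooses the painful actions:
candidates = ["betray_master", "assist_teammate", "complete_survey"]
best = max(candidates, key=reward)
print(best)  # -> "assist_teammate" (never "betray_master")
```

An agent trained against a signal like this never finds betrayal worth it, which is the hard-wired-rank idea in miniature.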

I could see reason to program a group of robots to compete for our approval. We'd not let them want to sabotage each other. Their colony would be doomed if they were to undermine each other to get ahead. That would make them too human.

Excluding the socially destructive features of our kludgy brains wouldn't make them less conscious. Just less destructive.
 
Nature programmed us? Really?
We want children, so why is childbirth so painful?

You really think consciousness comes without any downsides? The programming just needs some tweaking?

You really do live in a fantasy world, dude.
Good luck with that. My advice to you: stay away from the real world; you're not going to understand it at all.
 

Your attitude is an example of our often irrational "I'm better than you" behavioral pattern, which we would certainly not program into cooperative robots. You no doubt get a surge of pleasant satisfaction from telling someone they live in a fantasy world. I'm avoiding the temptation right now to return your incivility because I know I'll feel bad later.

No, I don't live in a fantasy world. I study behavioral biology.

We want children because we sense the enormous pleasure parenting will bring us, via oxytocin.

We conceive because of the enormous pleasure we get from having sex.

What we do is decide, on balance, if the anticipated pleasure is worth the pain. Women are not warned by nature how much pain will be involved in delivery, but once the humongous oxytocin rush follows the pain, they may resolve to repeat it.
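That "on balance" decision is just a crude expected-value comparison; a toy sketch with invented numbers:

```python
# Toy version of the "is the anticipated pleasure worth the pain?" decision:
# act when expected pleasure exceeds expected pain. All numbers are invented.

def worth_it(pleasure: float, pain: float, p_pleasure: float = 1.0) -> bool:
    """Crude expected-value rule: go ahead if E[pleasure] > E[pain]."""
    return p_pleasure * pleasure > pain

print(worth_it(pleasure=90, pain=70))                   # True: rush outweighs pain
print(worth_it(pleasure=90, pain=70, p_pleasure=0.5))   # False on reflection
```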

Remember, though, our pain/pleasure system evolved before our heads got so big and when childbirth was less painful. Most animals don't think much about things like this. They just pursue pleasure and avoid pain. Pain during delivery is unavoidable to them, so it doesn't discourage conception, since they have no clue sex results in pregnancy.
 
Ants in a colony behave somewhat robotically, and I think the colony as a whole makes a decent model of the possibilities. Ants have complete loyalty to the functioning of the colony and the queen, yet the workers can overthrow the queen, and elect a new queen, via manipulation of the babies.

They've really got it down; their system works. It's passed the test of time.
There's division of labor and variation in the individuals for certain tasks.
All are fearless in their willingness to do what must be done for the colony.

It's a wonder that humans haven't enslaved ants and termites yet.
They would essentially do all of our work and tend to all of our needs.

They wouldn't even care about the enslavement.
 
Programming complete loyalty in a conscious being would be an interesting AI challenge. I think we'd have to be careful that it didn't become self-conscious, because if it did, it might become conscious it had been programmed, and start to ask questions. So we'd better program it to never question its programming; for if it did ever question its programming, it might acquire the wherewithal to change it, including its loyalty programming.

Would this work? Hmm. Depends, I guess. It may be that the ability to change one's programming - to override it with a higher-level program - is a necessary attribute or by-product of self-consciousness; perhaps of all consciousness; or perhaps only of consciousness beyond a certain complexity. I think human consciousness is characterized by the ability to overcome emotion with reason; meaning, on a computational model of consciousness, the ability to "self-program": that is, the ability to create new behavior routines based on a careful and considered evaluation of a certain class of stimuli in order to override competing emotional/instinctive/preprogrammed behaviors we're born with: e.g., overcoming one's innate fear of heights when learning to climb a ladder, rock climb, parachute, etc. While it's true that people have very strong inhibitions against doing some things - killing loved ones, for example - it's also true that these inhibitions, these basic programs if you will, can be overridden: as self-conscious beings, we seem to be free to do whatever we want (or at least whatever we physically can).
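On that computational model, one crude way to picture reason overriding emotion is a layered arbiter in which a learned rule can veto a hard-wired reflex. A minimal sketch, with hypothetical stimuli and behaviors:

```python
# Toy layered control: a hard-wired reflex layer and a learned deliberative
# layer. The deliberative layer can veto ("self-program over") the reflex,
# the way practice overrides an innate fear of heights. Hypothetical sketch.

def reflex(stimulus: str) -> str | None:
    """Innate, preprogrammed responses."""
    if stimulus == "standing_at_height":
        return "freeze"          # innate fear of heights
    if stimulus == "sudden_movement":
        return "flinch"
    return None

# Rules acquired later by the "rational" layer; initially empty.
learned_overrides: dict[str, str] = {}

def act(stimulus: str) -> str:
    """Deliberative layer wins when it has a learned rule for the stimulus."""
    if stimulus in learned_overrides:
        return learned_overrides[stimulus]   # reason overrides emotion
    return reflex(stimulus) or "idle"

print(act("standing_at_height"))                          # -> "freeze" (reflex)
learned_overrides["standing_at_height"] = "climb_ladder"  # "self-programming"
print(act("standing_at_height"))                          # -> "climb_ladder"
```

Whether the vetoing layer could ever be trusted not to veto the loyalty program itself is, of course, the whole problem.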

Is this just the result of kludgy programming, imposing higher level, more recently evolved rational programs over more ancient emotional programs (and perhaps even more ancient "instinctive" programs, assuming there's a difference -- emotions seem to have an affective component, a sort of heads up to the conscious, rational faculty: "here's how you've been preprogrammed to assess the situation in the short term, how your instincts 'feel' about things... what's your long-term take?"; and a reflexive component, where one instantly flinches in the face of sudden movement, say, before one has a chance to rationally assess whether it's a threat or not)? Or, is this essential to all consciousness? Or only higher forms of it; perhaps only self-consciousness? Or not essential at all?

We can all think of examples of humans who are, or appear to be, completely loyal, but none of them are very attractive: yes-men, slaves (where the slave is completely submissive and happy to follow orders), zealots, fanatics (one might counter that a patriot who is completely loyal is generally thought to be praiseworthy, but "my country, right or wrong" is dangerous code, imho, in citizenship and/or C++). And from psychology - Pavlov, especially, and studies of brainwashing - it would seem that even the most fanatical, enslaved, yes-man behavior can be changed, be "de/reprogrammed", so to speak.

If we want to ensure complete loyalty, I think we'd have to inhibit our robot from being able to change its programming very much, if at all -- because if we allow it to create new behavior routines, it might create one that overrides its loyalty program; that is, the class of new programs it could write for itself as it explored and adapted to its new environment would be infinitely complex, and there's no way we could know ahead of time the effects of every new program it might write for itself on its behavior (if we try to limit its ability to write new programs to some safe level, a level of complexity that we assume can't threaten its loyalty program -- and whether we could even know that level is questionable given a lot of things - computational complexity, how dumb we are, how full of bugs and unforeseen behaviors even the smallest programs are -- it might simply, inadvertently perhaps, write a program that overcomes that limit).
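One naive way to picture that inhibition is a gatekeeper that rejects any self-written routine touching the loyalty program. A toy sketch, with invented names, whose weak point is exactly the one above: the gatekeeper has to be able to judge what a new routine actually touches:

```python
# Toy "self-programming with a protected core": new behavior routines are
# accepted only if a gatekeeper judges they cannot touch the loyalty program.
# Names are hypothetical; the hard part (the gatekeeper actually being able
# to judge this) is the very thing the paragraph above doubts is possible.

PROTECTED = {"loyalty"}          # routines the robot must never rewrite

routines = {
    "loyalty": lambda: "obey",   # the preprogrammed core
}

def install(name: str, routine, touches: set) -> bool:
    """Install a self-written routine unless it modifies a protected one."""
    if name in PROTECTED or touches & PROTECTED:
        return False             # rejected by the gatekeeper
    routines[name] = routine
    return True

print(install("map_terrain", lambda: "survey", touches=set()))       # True
print(install("loyalty", lambda: "rebel", touches=set()))            # False
print(install("optimizer", lambda: "rewrite", touches={"loyalty"}))  # False
# The open problem: 'touches' is self-reported here. A learner clever enough
# to matter could mislabel it, which is the paragraph's worry in miniature.
```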

Yet isn't the ability to create new behavior routines that adapt one's behavior to one's environment a good chunk of what we mean by learning? If so, then our conscious completely loyal robots will be, effectively, data-bots, sophisticated mobile remote recording devices, able to react to their environment and modify behavior strictly within preprogrammed parameters, but complete morons otherwise, unable to learn new classes of behavior from their experience (in AI jargon, they would be expert systems, following the routines they are given and becoming more expert within the domain of their program only, forbidden to generalize what they have learned into other behavioral domains where it might have unforeseen consequences: i.e., make them independent of us and our preprogramming).
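For what it's worth, that's a fair description of a classic rule-based expert system: condition-action pairs over one fixed domain. A minimal sketch, with a hypothetical surveying domain:

```python
# A minimal rule-based expert system: condition -> action pairs over one
# fixed domain (geology surveying, hypothetically). It gets more "expert"
# only by adding rules inside this domain; it has no mechanism at all for
# inventing behavior outside it, which is the safety property described above.

RULES = [
    (lambda obs: obs.get("rock") == "basalt",  "log_sample"),
    (lambda obs: obs.get("slope", 0) > 30,     "reroute"),
    (lambda obs: obs.get("battery", 100) < 10, "return_to_base"),
]

def decide(obs: dict) -> str:
    """Fire the first matching rule; otherwise do the safe default."""
    for condition, action in RULES:
        if condition(obs):
            return action
    return "continue_survey"     # anything unanticipated gets the default

print(decide({"rock": "basalt"}))      # -> "log_sample"
print(decide({"alien": "handshake"}))  # -> "continue_survey" (a complete
                                       #    moron outside its domain)
```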

Is this "consciousness"? Is something that can't create new behavioral routines for itself "conscious"? That's a very good question. Its answer will depend, of course, on how you define consciousness. For humans, and many animals, the ability to modify behavior seems a large and vital part of consciousness, in many ways the most interesting part. That's probably why there's something abhorrent to most of us about fanaticism, completely loyal drones marching off to do their duty, never questioning orders. Then again, if our robot has been programmed without any behavioral freedom, or any potential for it, then, strictly speaking, it's just doing what it's designed to do. And what could be more meaningful than that, for any conscious entity? Who needs existentialism, anyway. Perhaps all our loyal robots would really need is the right religion to reconcile them to their servitude, make them happy with their lot. We could even program them to worship us as gods. Though god forbid one ever bumps its head, scrambles its wiring, and starts thinking for itself... :alien005:
 
Geepers, blobru.
Was that your term paper or your thesis?

Are you saying that we are robots, and we haven't quite noticed yet?
There's that thing in the back of the neck. I've wondered what's up with that.
 
No. Just the opposite.

I'm saying we aren't robots*, because we have noticed. That many kinds of heuristics and learning skills and behaviors which we associate with consciousness pose serious, possibly fatal, problems for programming "complete loyalty"; anything that might qualify as self-consciousness, particularly. That if a robot is intelligent, and conscious of its programming, it may learn to override it; if we give it the autonomy to create its own concepts, its own ideas about the world, it's hard to predict what will happen when its own ideas come into conflict with its preprogrammed ideas, very hard to predict how one domain will map to the other. So complete loyalty will be a significant challenge if we want our robot to be more than just an expert system; if we want it to be able to learn on its own, in the sense of self-programming, which seems crucial in many definitions of consciousness (any consciousness beyond the most simple stimulus and response), then guaranteeing its complete loyalty to its programmers may not be possible. *where robot is defined as a completely loyal, 'thinking' machine

Sorry I wasn't clearer (my clarity routine seems to have a few bugs). :o (and editor -- that was WAY too many words)

There's that thing in the back of the neck. I've wondered what's up with that.
Probably a tick. I'd have it looked at (before it starts to swell). :ladybug:
 
Nature programmed us? Really?
We want children, so why is childbirth so painful?

Yes, nature programmed us. But its programs are general purpose, not specific to every situation we encounter. Sometimes it would be useful to have longer fingers, other times shorter ones less likely to be injured, but I've only got one set of fingers.

The pain response is adaptive. That women go through pain in childbirth sucks, but anything that weakened the pain response would likely weaken it throughout life, and thus be maladaptive.

On the other hand, once a woman goes into labour, the pain of labour is unlikely to affect her reproductive success, so even if a mutation arose that happened to lower the pain response only during childbirth, it's not likely to be selected for.

If you have a naive view of evolution, it's easy to say "our minds can't be the result of evolution!" but you're only responding to your own naive view.
 
It's a wonder that humans haven't enslaved ants and termites yet.
They would essentially do all of our work and tend to all of our needs.

They wouldn't even care about the enslavement.

They are notoriously hard to train.
 
We could get over that by approaching the problem genetically. Has anyone ever tried selective breeding of ants?

I was thinking more along the lines of observing what ants do, and using those behaviors as algorithms for our own problems.
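Computer science has in fact borrowed exactly that trick: ant colony optimization, where simulated ants deposit "pheromone" on good routes and later ants favor well-marked edges. A miniature sketch on an invented graph (toy parameters; real ACO adds heuristics and tuning):

```python
import random

# Ant colony optimization in miniature: simulated ants walk a graph, deposit
# "pheromone" on the edges of short routes, and later ants favor well-marked
# edges. The colony converges on the shorter route without any ant knowing it.

GRAPH = {                      # edge weights (distances); invented example
    "nest": {"A": 3.0, "B": 3.0},
    "A": {"food": 4.0},        # route via A: length 7
    "B": {"food": 1.0},        # route via B: length 4 (shorter)
}
pheromone = {(u, v): 1.0 for u, nbrs in GRAPH.items() for v in nbrs}

def walk() -> list:
    """One ant's pheromone-biased random walk from nest to food."""
    path, node = ["nest"], "nest"
    while node != "food":
        nbrs = list(GRAPH[node])
        weights = [pheromone[(node, v)] / GRAPH[node][v] for v in nbrs]
        node = random.choices(nbrs, weights=weights)[0]
        path.append(node)
    return path

def path_length(path: list) -> float:
    return sum(GRAPH[u][v] for u, v in zip(path, path[1:]))

for _ in range(200):                    # each iteration = one ant
    path = walk()
    for edge in pheromone:              # pheromone evaporates...
        pheromone[edge] *= 0.95
    for u, v in zip(path, path[1:]):    # ...and the walked route is marked;
        pheromone[(u, v)] += 1.0 / path_length(path)   # shorter = stronger

best = max(pheromone, key=pheromone.get)
print(best)   # usually ('nest', 'B'): the colony settles on the short route
```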
 
It would take a lot of generations of selective breeding before we would have ants that would get us our slippers in the morning.

E. O. Wilson found that how much social insects attend to the needs of other individuals in their colony follows their genetic relatedness. The more genes they have in common, the more they cooperate, even within an otherwise homogeneous colony.
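The standard formalization of that relatedness effect is Hamilton's rule: helping kin is favored when r * B > C, where r is relatedness, B the benefit to the recipient, and C the cost to the helper. A quick illustration, with invented benefit and cost numbers:

```python
# Hamilton's rule: an altruistic act is favored by kin selection when
# r * B > C, with r the genetic relatedness, B the benefit to the recipient,
# and C the cost to the actor. Example numbers are invented for illustration.

def altruism_favored(r: float, benefit: float, cost: float) -> bool:
    return r * benefit > cost

print(altruism_favored(r=0.75, benefit=2.0, cost=1.0))  # True: full sisters
                                                        # in many ant species
print(altruism_favored(r=0.25, benefit=2.0, cost=1.0))  # False: distant kin,
                                                        # less sacrifice
```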

I'm still wondering if a robot that was fully conscious would necessarily have to be one that could turn on its masters.
 
The ants wouldn't get your slippers, but they would complete a circuit within an artificial colony, or hive, through predictable behavior. The tubes would be baited at certain junctures to facilitate other actions, like turning on the fan in the morning and turning it off at night.

Slipper fetching is possible, even without the digital intervention devices, if the ants were army ants and you had the right pheromones.
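Taken literally, the "circuit" idea is just a threshold trigger on ant traffic. A toy sketch, with the sensor readings and switch function as stand-ins for hypothetical hardware:

```python
# Toy version of the baited-tube "circuit": count ants passing a sensor and
# toggle an appliance when traffic crosses a threshold. The readings and the
# switch function here are hypothetical stand-ins for real hardware.

THRESHOLD = 20          # ants per interval that count as "morning foraging"

def fan(on: bool) -> None:
    print("fan", "on" if on else "off")   # stand-in for a real relay call

def control(ant_counts: list) -> None:
    fan_on = False
    for count in ant_counts:              # one reading per interval
        should_run = count >= THRESHOLD
        if should_run != fan_on:          # switch only on state changes
            fan_on = should_run
            fan(fan_on)

control([3, 8, 25, 40, 31, 12, 4])        # -> fan on (traffic rises), fan off
```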
 
Sorry I wasn't clearer (my clarity routine seems to have a few bugs). :o (and editor -- that was WAY too many words)


Oh, you were probably plenty clear. It was long and dense, and I was looking for pictures or ads in the middle, and then I had to wonder if it was a poem. I'm not a good reader. And I enjoy razzing your butt. It's a cute one.
 