
Explain consciousness to the layman.

Status
Not open for further replies.
So, to summarize, a wasp has no self-awareness but its body has many self-aware parts?


No... that is not what I am saying.... I am saying that the wasp is self-referential and so are its parts.

However, just like its DNA, it is not conscious.... but it is DEFINITELY more conscious by any stretch of the concept than any computer program we have today.

In other words, if we are to claim that a computer program is self-aware, we would have to concede the same of a cockroach long before we grant it to the computer.

But Pixy claims that this utterly and totally self-referential system is not conscious, while at the same time claiming that a computer program which is orders of magnitude less self-referential IS conscious according to his definition of the concept, a definition based on self-reference... assuming, of course, that you ignore for a moment the circularity of that definition.


It does not compute...:p
 
Last edited:
PixyMisa said:
If any part of this classification and correlation by virtue of its similarities to other patterns is deficient or absent then this classification and correlation through similarities to other patterns, likewise, would be deficient or absent... ie the correspondence would not exist.
All or nothing fallacy.

Not
 
I'd say that on the contrary, it's a clear-cut disqualification.

It's aware of its environment, but it has no awareness of its own behaviours. Reflection, self-reference, self-awareness, consciousness, mind - call it what you will, the wasp doesn't have it, but the computer program does, and so do we.


So being aware of its own need to eat, drink and reproduce, as well as being aware of its own injuries and self-healing, is not being self-aware?

Its ability to analyze the movement of its prey and calculate the actions required to MODIFY ITS BEHAVIOR so as to home in on a prey that is trying its hardest to FOOL it, is not being aware of its behavior? Have you ever gone hunting?

You are right though.... it is NOT.... I wholeheartedly agree.... it is not self-aware.

But how is a computer program any different?

If you claim all the above complexity and self-referencing is just INSTINCT then how do you define instinct?

Is instinct the ability to follow a SET PROCEDURE (heuristic if you will) that is “hardwired” in the organism?

So how is a program in a computer not instinct? Is it just because the computer can change parts of its program? That is not true though, because when it changed that part it only replaced it with another part that was already there (i.e. hardwired/programmed). When an adaptive program replaces some lines of code in its normal procedure with lines it copies from a different part of its memory, it is STILL only following a heuristic and it is still using INSTINCT. Even if the replacement part is calculated by another heuristic instead of just being copied from a set memory area, IT IS STILL just following ANOTHER INSTINCTUAL part of the overall instinct.
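The point about adaptive programs can be made concrete with a toy sketch. Everything here (the step functions, the meta-rule, the "library") is invented for illustration: the program swaps one of its own steps for code already present elsewhere in its memory, but the swap itself follows a fixed, hardwired rule.

```python
# Sketch of the "an adaptive program is still instinct" point: the program
# replaces one of its own steps with another, but the replacement rule is
# itself a fixed procedure. All names are invented for this illustration.

def walk(x):  return x + 1
def run(x):   return x + 10

procedure = [walk, walk]        # the program's current "behaviour"
library   = {"fast": run}       # code already sitting elsewhere in memory

def adapt(history):
    # The meta-rule is itself hardwired: if progress is slow, copy in 'run'.
    if history and history[-1] < 5:
        procedure[0] = library["fast"]

x = 0
trace = []
for _ in range(3):
    for step in procedure:
        x = step(x)
    trace.append(x)
    adapt(trace)

print(trace)   # [2, 13, 24] - behaviour changed, but only by a set rule
```

The program "adapted", yet every part of the adaptation, including the decision of when and how to adapt, was already written down in advance.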

So if the wasp is not self-aware despite all that appearance of self-awareness of its own needs and its bodily actions, and you call that instinct... how then is a program that is doing the same but following a DIFFERENT INSTINCT (i.e. a program of set procedures) any different?


At the very least the wasp did not have any human being or otherwise WRITE UP the instinct for it. So its existence is UTTERLY self referential by virtue of the fact that it EVOLVED.

A computer program was CREATED by a conscious being….. a wasp was NOT CREATED by anything except through its SELF-ADAPTATION.

In case someone might have missed that last part
A wasp SELF-CREATED by SELF-ADAPTATION: the ultimate in self-referencing, I'd say. Unlike a computer program that was CREATED by a conscious entity..... so how is self-creation by self-adaptation any less self-aware than a program?
 
Long thread to read it all, so apologies if I am going over old ground.

PixyMisa said:
Consciousness is self-referential information processing.

By this I mean that consciousness is the processing of information - thinking, remembering, making decisions - that refers back to its own state and activity.

When we refer to someone or something as conscious, we're talking about the ability of the system to reflect on its own mental state. A living creature that acts directly upon stimuli is not conscious. A system that has a model of the external world is not automatically conscious, no matter how detailed and extensive that model may be.

A system that also includes a model of its own processing, and includes that model in its decision path, is conscious.
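What "a model of its own processing, included in its decision path" might mean can be sketched in a few lines. This is purely my illustration, with invented names, not anyone's actual program: the agent's decision consults a record of its own recent decisions, not just the external stimulus.

```python
# Toy illustration of a decision path that includes a model of the system's
# OWN processing. All names are invented for this sketch.

class ReflectiveAgent:
    def __init__(self):
        self.history = []   # the agent's record of its own recent decisions

    def decide(self, stimulus):
        # First-order step: react to the external stimulus.
        action = "approach" if stimulus == "food" else "ignore"

        # Second-order step: consult a model of its own activity and let
        # that model alter the decision.
        if self.history[-3:].count(action) == 3:
            action = "try_something_else"   # notices it keeps repeating itself

        self.history.append(action)
        return action

agent = ReflectiveAgent()
print([agent.decide("food") for _ in range(5)])
# ['approach', 'approach', 'approach', 'try_something_else', 'approach']
```

A purely stimulus-driven agent (the sphex wasp of the thread) would output "approach" forever; the reflective one changes course because it observes its own behaviour.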


I think your definition is reasonable but limited. You have structured it to include computers, but no-one likes to think computers are conscious. If life is simply a complex mechanical-physical arrangement of structures then perhaps your definition is suitable.

Despite the posts about complexity of brain versus machine, I would refer to this site on intelligence, where 200 lines of code mimic "intelligence".

http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/msb.html

ELIZA is a simple program written by Joseph Weizenbaum that in one version imitates a psychiatrist...

"Joe, will you please logout." WE WERE DISCUSSING YOU, NOT ME "I need to use the computer." DO COMPUTERS WORRY YOU? "No, computers don't worry me." WHAT DO YOU THINK MACHINES HAVE TO DO WITH YOUR PROBLEM? "Come off it." WHY DO YOU SAY THAT? The colleague, a professional who knew all about ELIZA had no idea that he wasn't talking to a human, and called Weizenbaum in a rage.


As to comparisons of the brain with the internet, putting 100 average students into a room does not make an Einstein. The sophistication of individual cells in the brain, acting in concert with electro-chemical and hormonal stimulation, is orders of magnitude higher.

I (like others here) have written programs for manufacturing machines using input and output. Despite being simple they appear intelligent, although repetitively so. When the logic becomes "fuzzy", and self-learning is added, machines become a lot more sophisticated (and unpredictable). Adding a level of "self-awareness", and using multiple redundancy to overcome defects does not make the machine intuitively conscious.

A computer does not derive its "programming" from a small collection of molecules that then grows reproductively (from the human embryonic cell) into an extremely sophisticated "self-aware" entity from the moment of birth. Yes, self-awareness grows dramatically with age - at my age of 63 I am only too "self-aware" of my brain and body degrading.

Where is that "programming" stored in the embryo? To me, this separates carbon-based life-forms from mechanical silicon structures with informational processing. A wasp may have a primitive low-level consciousness, but I doubt we will know anytime soon.
 
I think your definition is reasonable but limited. You have structured it to include computers, but no-one likes to think computers are conscious. If life is simply a complex mechanical-physical arrangement of structures then perhaps your definition is suitable.
Well, the question is, if life isn't just a complex arrangement of physical structures (really, processes, but I take your point) - then what is the other component?

As to comparisons of the brain with the internet, putting a 100 average students into a room does not make an Einstein. The individual sophistication of cells in the brain acting in concert with electro-chemical and hormonal stimulation is orders of sophistication higher.
Higher than individual transistors? Sure. A neuron is probably best considered as a circuit of somewhere between 1000 and 10,000 transistors. Three or four orders of magnitude more complex.

However, you can't just stop there. Transistors operate on the order of a million times faster than neurons. So while it may take 1000 transistors to emulate a neuron, those thousand transistors can process a million times as much information as a neuron.
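The arithmetic in that comparison is worth making explicit. The specific figures (1,000 transistors per neuron, a millionfold speed ratio) are the post's rough assumptions, not measured values:

```python
# Back-of-the-envelope version of the transistor/neuron comparison above.
# Both figures are the post's rough assumptions, not measurements.

transistors_per_neuron = 1_000      # complexity gap: circuit size per neuron
speed_ratio            = 1_000_000  # transistor ops per neuron firing

# A 1,000-transistor circuit emulating one neuron still runs a million
# times faster, so per transistor, relative to a neuron's "share":
per_transistor = speed_ratio / transistors_per_neuron

print(per_transistor)   # 1000.0 - a net thousandfold throughput advantage
```

In other words, even granting the neuron a 1,000x complexity advantage, the speed difference leaves the silicon side roughly three orders of magnitude ahead in raw information throughput, on these assumptions.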

I (like others here) have written programs for manufacturing machines using input and output. Despite being simple they appear intelligent, although repetitively so. When the logic becomes "fuzzy", and self-learning is added, machines become a lot more sophisticated (and unpredictable). Adding a level of "self-awareness", and using multiple redundancy to overcome defects does not make the machine intuitively conscious.
Perhaps not intuitively. But you have to ask, what is it that is missing - if anything?

Remember the sphex wasp - it's unquestionably alive, but it doesn't exhibit even the self-awareness of a simple reflective monitoring routine.

Life in itself cannot be the difference between conscious and unconscious systems.

A computer does not derive its "programming" from a small collection of molecules that then grows reproductively (from the human embryonic cell) into an extremely sophisticated "self-aware" entity from the moment of birth. Yes, self-awareness grows dramatically with age - at my age of 63 I am only too "self-aware" of my brain and body degrading.
Human infants aren't actually self-aware - for example, they fail the mirror test. They're basically inference engines, furiously forming and testing hypotheses but with no review or overall plan. Consciousness appears to emerge around 18 months.

Where is that "programming" stored in the embryo? To me, this separates carbon-based life-forms from mechanical silicon structures with informational processing. A wasp may have a primitive low-level consciousness, but I doubt we will know anytime soon.
See above. It's not stored anywhere. Babies are born without it. It forms later as it emerges from an adaptive, iterative, training algorithm. (A process which literally reshapes the connections in the brain.)
 
I must admit that I have to keep pulling my eyebrows back down when I read what PixyMisa is posting on this thread. Of course I can't really criticize him until I have a better definition of "consciousness"...
My interpretation of his approach is that, having identified the fundamental basis of consciousness as self-referential information processing, and given the level of opinion that human consciousness is, or must be, more than that, he's prompting for suggestions as to what more is considered necessary.
 
My interpretation of his approach is that, having identified the fundamental basis of consciousness as self-referential information processing, and given the level of opinion that human consciousness is, or must be, more than that, he's prompting for suggestions as to what more is considered necessary.


Anyone who claims to know enough to conclude that a computer program IS conscious should go and tell the scientists quoted below that their search is over, and maybe win their gratitude for solving their quest.

Since it is quite obvious that if a wasp, which is an extremely complex self-referential system, is by Pixy's own admission not conscious then I am guessing there is a LOT MORE that is needed.

However, as these scientists, who work on the topic day in and day out for a career, have said:
...a quote from a book on human brain function written by 5 neuroscientists:


"We have no idea how consciousness emerges from the physical activity of the brain and we do not know whether consciousness can emerge from non-biological systems, such as computers... At this point the reader will expect to find a careful and precise definition of consciousness. You will be disappointed. Consciousness has not yet become a scientific term that can be defined in this way. Currently we all use the term consciousness in many different and often ambiguous ways. Precise definitions of different aspects of consciousness will emerge ... but to make precise definitions at this stage is premature."


....I guess it's actually your assertion that is unsupported. These 5 neuroscientists seem to completely agree with !Kaggen. But that's ok...you're just wrong (...or what was it Wolfgang Pauli once said..." not even wrong " ).
 
My interpretation of his approach is that, having identified the fundamental basis of consciousness as self-referential information processing, and given the level of opinion that human consciousness is, or must be, more than that, he's prompting for suggestions as to what more is considered necessary.
Yep. I've been asking for that for years. And I have actually had a handful of reasonable suggestions.
 
My interpretation of his approach is that, having identified the fundamental basis of consciousness as self-referential information processing, and given the level of opinion that human consciousness is, or must be, more than that, he's prompting for suggestions as to what more is considered necessary.


Moreover, anyone who claims that
This is precisely the problem I have addressed. I have offered a precise, consistent, operational definition of consciousness, one that accounts at a basic level for every feature we ascribe to consciousness (and that is actually in evidence).

Should seriously consider informing the neuroscientists who are saying
"We have no idea how consciousness emerges from the physical activity of the brain and we do not know whether consciousness can emerge from non-biological systems, such as computers... At this point the reader will expect to find a careful and precise definitionof consciousness. You will be disappointed. Consciousness has not yet become a scientific term that can be defined in this way. Currently we all use the term consciousness in many different and often ambiguous ways. Precise definitions of different aspects of consciousness will emerge ... but to make precise definitions at this stage is premature."
 
Yep. I've been asking for that for years. And I have actually had a handful of reasonable suggestions.
What I find odd is that, sfaik, you've not written a book distilling Hofstadter's ideas into a readable format so the world can be brought up to speed on this breakthrough.
 
My interpretation of his approach is that, having identified the fundamental basis of consciousness as self-referential information processing, and given the level of opinion that human consciousness is, or must be, more than that, he's prompting for suggestions as to what more is considered necessary.


Yep. I've been asking for that for years. And I have actually had a handful of reasonable suggestions.


Ok.... so if you do not know what else is required then how is it that you could actually conclude
at least some of the applications typically found on a modern computer are conscious.

How can you GAUGE and ESTABLISH the above assertion if you do not even know what constitutes consciousness yet?

Also when you say
I've written such programs myself.

If you do not yet know what consciousness is, and you have been asking about it for years, then what PARAMETERS did you use to establish that these programs you wrote, and are claiming to be conscious, were in fact so?

What criteria did you use to conclude that computers are conscious and that you have written conscious programs, if in fact you do not know what consciousness is? You have been asking for years what else is needed to be completely sure what it is, and thus to be able to gauge whether a computer program is in fact conscious, as you are apparently claiming in the above two quotes.
 
What I find odd is that sfaik you've not written a book distilling Hofstadters ideas into readable format so the world can be brought up to speed on this breakthrough.



Not to mention at least informing the scientists who have written a book and said


...a quote from a book on human brain function written by 5 neuroscientists:


"We have no idea how consciousness emerges from the physical activity of the brain and we do not know whether consciousness can emerge from non-biological systems, such as computers... At this point the reader will expect to find a careful and precise definition of consciousness. You will be disappointed. Consciousness has not yet become a scientific term that can be defined in this way. Currently we all use the term consciousness in many different and often ambiguous ways. Precise definitions of different aspects of consciousness will emerge ... but to make precise definitions at this stage is premature."


....I guess it's actually your assertion that is unsupported. These 5 neuroscientists seem to completely agree with !Kaggen. But that's ok...you're just wrong (...or what was it Wolfgang Pauli once said..." not even wrong " ).
 
So I guess what we have to conclude from this…is that the neuroscience community is wrong…and you are right.

Glad we’ve managed to clear that up.



But according to this post, it seems he does not in fact know, or at least is not yet sure.... yet he is still obviously sufficiently sure to conclude that the programs he wrote on his computer are conscious.


My interpretation of his approach is that, having identified the fundamental basis of consciousness as self-referential information processing, and given the level of opinion that human consciousness is, or must be, more than that, he's prompting for suggestions as to what more is considered necessary.


Yep. I've been asking for that for years. And I have actually had a handful of reasonable suggestions.
 
Yes. And they are talking about the general usage of the word, not the specific definition I've offered.

If the experts in the field disagree with you, simply conjure up your own definition to suit yourself.

Works every time.
 
Wasps can reproduce themselves and can SELF-HEAL damaged parts. Don't you call this self-referential? Isn't the process of reproducing a copy of itself a Self-Referential Process? Doesn't it have to refer to itself to reproduce parts of itself? Does it also take some parts of the environment to actually recreate those NEW cells it created? In other words it CREATES NEW elements of itself out of the chemicals and elements in its surroundings.

So a computer program is self-aware because it can refer back to the program code it is running and modify it according to another block of code that is part of the program.... right? Or are you saying that the program can create its own code out of the blue, with which to modify the code it currently needs to replace, and that it does so without first running some code that specifies the procedure for how to do that?

A self-referential (adaptive) program can rewrite its own code by copying another bit of code from another area of its memory (or some other storage) or it can create a piece of replacement code according to an algorithm (set procedure) that is in itself a set piece of code in another section of the program.

A computer cannot ingest or consume anything external to its own self and then modify this to create new or replacement parts of itself.... unless say you enable it to refer to some network server where it can copy code from a storage device on the server....... but in this case SOMEONE had to put that code there..... and it is CODE not some random ones and zeros that it manages to reorganize into coherent code all by itself out of the blue.


Anyway.... how is that different from DNA, a wasp, or an amoeba?

A wasp that can HEAL from an injury is doing precisely a self referential procedure.
  • It is referring to its parts that have been damaged (self aware??)
  • It then recognizes the fact that they are damaged (self aware??)
  • It then refers to something in itself (??) to figure out how to reconstruct the damaged parts (self aware and thinking???)
  • It then performs another entirely separate SUB-PROCEDURE that is also in itself a self referential process which is the process of REPLICATING bits of its DNA to recreate NEW cells and parts that are done according to an algorithm (DNA+RNA) and using chemicals that are part of its nutritional intake.
  • In that process it has also modified and organized elements in the world around it to create an entirely new thing (parts of itself) or entirely new organism running all by itself (its children).
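The steps above can be sketched as a self-repair loop. The names and the "genome" template are invented for this illustration; it only mirrors the structure of the argument, not real biology:

```python
# The self-repair procedure described above, as a toy loop.
# All names and the template are invented for this sketch.

GENOME = {"wing": "wing_cells", "leg": "leg_cells"}   # the rebuild template

def self_repair(body):
    for part, state in body.items():     # 1. refer to its own parts
        if state is None:                # 2. recognise that a part is damaged
            template = GENOME[part]      # 3. consult something in itself
            body[part] = template        # 4-5. replicate/rebuild the part
    return body

body = {"wing": "wing_cells", "leg": None}   # leg is damaged
print(self_repair(body))
# {'wing': 'wing_cells', 'leg': 'leg_cells'}
```

Note that every step in this loop, including the "recognition" of damage, is itself just a set procedure, which is exactly the parallel being drawn between the wasp's instinct and a program's code.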

So if, as you rightly said, a wasp is not self-aware, but the above procedure it carries out is actually more complex, more self-referential and more adaptive than any computer, then why do you call the computer conscious but not the wasp?

I've never found the definition of "self-referential" remotely rigorous or well thought out.*

*Of course that is in no way intended to show disrespect to any individual.
 
If the experts in the field disagree with you, simply conjure up your own definition to suit yourself.

Works every time.
Order counts!

If you conjured up your own definition before you learned of how experts define the word then as long as you use your definition consistently, it is valid.
 
Uh Oh! My eyebrows are going up again!

Surely you must agree that there is something lacking in the definition of consciousness if an inanimate object gets to be considered self-aware before a living thing does.

Perhaps the idea is to eventually have only computers able to be conscious. Proving that human beings aren't conscious has to be the next step.
 
I'm sorry I thought you understood the point of the machine -- you don't.

There is no "point of the machine", rocketdodger. (Unless you're assuming your conclusions.)

You're simply asking questions about what would happen to a body's awareness under two different physical conditions.

When you agreed to the first part, I assumed it meant you understood that the machine was capable of magically "patching up" the fact that some particles were farther from each other than they should be, so that the new interactions between all the particles were effectively the same as the old ones even though the distances are completely different.

Of course I understood that. Still do.

The only reason I kept the head intact is because I knew it would be less extreme than some other possibilities. Case in point: you agreed with that scenario but disagree with this one, yet they are actually the same from a technical standpoint.

They are not at all the same from a technical standpoint.

In one condition, you separate the body from the head, which (given the action of the machine) has no effect on the brain, the organ that makes consciousness start and stop.

In another condition, you vaporize the brain itself and spread it across the galaxy... but you keep the particles moving in the same manner that they all would be if they were all together, including the ways in which they would be affected by other particles.

The vaporization of the brain is significant here, although you seem to believe it is not. And the relative behaviors of the now-distant particles are of much less importance than you obviously appreciate.

The difference is like asking what if I cut my truck in two between the cab and the bed, and magically kept the particles going back and forth... and what if I vaporized it and spread it across the galaxy and kept the particles responding to each other.

Only in the first scenario could my truck still do the work it does now, including small-scale stuff like firing the spark plugs and moving exhaust. Ditto for the brain.

Your "semantics don't matter" approach leads you to absurdity upon absurdity, but this apparently only makes the hypothesis that much more awesome in your estimation.

The problem is, when Planck introduced his shocking work, he noted that he was forced to accept it because of the hard evidence. And when Einstein introduced his shocking work, he could demonstrate concretely how our currently accepted science implied that these shocking things must be true.

Your philosophy, however, has no such basis. Which means you should take its absurdities more as red flags than green lights.

So let's go back to an even simpler example and make sure we agree on it. Instead of you and the space you are in, let's just look at two particles. Not even two atoms, just two particles.

If, by definition, the machine's action is this:

1) Applying some spatial/temporal transformation to one of the particles ( any of the 4 or more dimensions we know of )
2) "Patching up" the interactions between the two particles such that if behavior A of particle 1 would lead to behavior A of particle 2, 1(A) --> 2(A), in the original "un-transformed" setup, behavior A of particle 1 will lead to behavior A' of particle 2, 1(A)-->2(A'), where the ' denotes that A' is identical to A other than the fact that it has the transformation applied to it.

In other words, if particle 1 would move a little and disturb particle 2 such that it moves a little, the machine would keep this interaction consistent *even if* the transformation applied took particle 2 thousands of lightyears from particle 1. Meaning, the particles would have no idea they were that far from each other -- their causal interactions with each other are effectively the same.

Do you accept this magical action of the machine? Do you accept that after the action of the machine the causal interactions between the two particles are effectively identical?

NOTE that one can view this machine as simply a "modified" laws of nature. We don't know "why" particles act the way they do, we just know "how" they act ( to the extent that we can determine that ). So this machine is simply something that insures the "how" of particle behavior is "locked down" to a given set by drastically changing the "why" of their action.

Dude... I know what you're saying, and it doesn't matter.

You can't vaporize a working physical system, strew it across thousands of light years, and expect that this system can do the same work it did before, even if you keep the particles responding to each other magically.
 
PixyMisa said:
Well, the question is, if life isn't just a complex arrangement of physical structures (really, processes, but I take your point) - then what is the other component?


If the other component is not metaphysical, then we are complex machines, and I accept your definition. A complex machine with a complex program appears to be sentient, and could even exhibit "feelings". Whether a machine is capable of "feelings" is a difficult question. It could be programmed to mimic feelings as if in response to hormones. Star Trek's Data seems to strive for "feelings".

The incredible joy of listening to a complex but beautiful piece of music, or the almost mystical ecstasy of admiring nature, would seem to be hard to program. Would you know, or be able to measure, a machine's blissful state? Or would it just be "faking it"?


For those of you who want some laughs (or maybe a free counselling session), download Dr Sbaitso. Or just read

http://www.thenervousbreakdown.com/jbenton/2010/09/dr-sbaitso-me/

In 1991, my 15-year-old daughter "interacted" with this MS-DOS AI program with disrespect and abuse. It seemed targeted to respond to teenage rebellion, and I was impressed at the appropriate responses given in a "mechanical voice".

PixyMisa said:
See above. It's not stored anywhere. Babies are born without it. It forms later as it emerges from an adaptive, iterative, training algorithm. (A process which literally reshapes the connections in the brain.)


Human babies start off helpless. Yes. And maybe without the mechanisms to demonstrate self-awareness. A young buck is born, and within the hour it begins to walk, and knows that it must run from danger. Where did that "programming" come from? The DNA strands have incredible complexity to get all the cell division to form various organs, but how does it form a brain that is "pre-programmed"?
 