
Has consciousness been fully explained?

Ask RocketDodger, it was his use of 'computation'. I was just querying the logic of the subsequent argument.

I still don't see how computation can be considered to take place in the absence of a conscious mind, which appears to require the existence of life.

Any physical operation could be taken to represent some kind of computation. The convolutions that have been gone through to try to divide meaningful computation from random events got nowhere. What makes a computation a computation is its interpretation by a conscious mind.
 
So life must be conscious because it is inherently purposeful and consciousness is the source of purpose.

OK; I think that speaks for itself...

Hmm. I think that the concept that life is inherently purposeful is not soundly supported. There's no real difference between the purpose of an amoeba and the purpose of a volcano, looked at objectively.

There's certainly no purpose outside of living, conscious things.
 
Hmm. I think that the concept that life is inherently purposeful is not soundly supported. There's no real difference between the purpose of an amoeba and the purpose of a volcano, looked at objectively.

There's certainly no purpose outside of living, conscious things.

I'd go out on a limb and say that the behavior of an amoeba is inherently purpose driven in some sense -- albeit whatever motivations it might have are most certainly simpler than those of, say, a dolphin, chimp or human. It'd be much much harder to make a similar case for an erupting volcano :)
 
rocketdodger said:
Thanks for taking the time to do that.

What I have bolded in your quote describes what somebody might learn over time through practice in dealing intelligently with a large number of people.

It's not just self-referentiality; there's quite a lot of coreferentiality involved, I think it's safe to say.

How does an algorithm pack in what seems like the potentially infinite amount of info needed to make the formalization you laid out?

Thanks in advance for dealing with my questions.

Well it obviously needs to be very complex -- but that isn't necessarily as big a deal as you might think.

Because you need to understand that there is a distinction between the complexity of an instance of an algorithm and that of the "original" or "base" or "class" algorithm, whatever you want to call it.

For example, your DNA contains the algorithm(s) needed to describe you exactly as you are -- kind of. But they are only the "template" part of the algorithm, the part people talk about when they say "this algorithm was used to arrive at this state" or something like that. What is omitted in such a statement, but what is implied, is all the data that is included in a particular "run" or "instance" of the algorithm. In the case of you, the "template" of your DNA has produced an instance that has been "running" on data -- the environment of you and your cells -- for a long time.

Obviously, the latter is much more complex than your DNA. It would be impossible to store very much about your current state in your DNA -- there is just too much complexity in you as you are now. People think DNA has tons of storage but they are wrong -- it had tons of storage in 1970, when people's core memory drives held only kilobytes, but now that you can get terabyte drives for under $50 DNA isn't the masterpiece it used to be. The trick is that evolution led to DNA that relies upon the laws of nature to do most of its heavy lifting.

For instance, does DNA instruct most biochemicals on how to react? No, of course not. There is no need to -- as long as DNA instructs the cells to make the chemicals, and maybe get them in proximity, basic chemistry takes over and the reactions take place. DNA doesn't even instruct enzymes on how to catalyze -- it just instructs the cells how to make the enzymes and then nature takes over. If you wanted to embed *every* step of such an algorithm in the instructions of DNA it would require orders of magnitude more complexity than is available.

And does DNA instruct your brain on how to act given a certain situation? Does DNA instruct you on how to drive a car, or speak English, or even make fundamental inferences about causation (the most basic task of any brain)? No, of course not. All DNA has done is given your neurons a very primitive topographical arrangement -- that of a baby. The laws of nature, combined with the environment you grew up in, lead to everything else.

So, long story short, the same kind of tricks can be used to specify a formal algorithm that has emergent behavior vastly more complex than the formalization itself. Yes, the formalization you are asking for would be very complex, but it might not be as complex as you envision, because all that is required is the formalization of steps that are not implicit given the laws of nature. All the other stuff -- the steps that are implicit, that combine with the basic algorithm to generate an instance that has emergent properties and behavior -- can be omitted, and there are very many such steps.

That means you can describe an infinite set of behavior with a finite algorithm. Then what ends up being infinite are the various instances of the algorithm that occur.
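The template-versus-instance distinction above can be sketched in a few lines of Python. This is a toy analogy of my own, not a model of DNA: the "template" is one short update rule, while the state accumulated by a particular run grows with the data stream it consumes.

```python
# Toy analogy: a compact "template" rule vs. the state of one running instance.
# The rule below is tiny, but the state accumulated by a run grows with its input.

def template(state, datum):
    """The whole 'genome' of this toy system: one short update rule."""
    return state + [state[-1] + datum] if state else [datum]

# One "instance": the same small rule run against a long stream of environment data.
state = []
for datum in range(1, 1001):
    state = template(state, datum)

print(len(state))   # 1000 accumulated entries
print(state[-1])    # 500500 -- far more state than the rule itself spells out
```

The rule never enumerates the run; the trace falls out of rule plus environment, which is the point being made about DNA.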


Reading what I've bolded I hope you understand you have not answered the question at all.

You say "all that is required is the formalization of steps that are not implicit given the laws of nature."

OK fine. I'll refer back to your original quote now:

Frank Newgent said:
rocketdodger said:
How might one formalize normative statements such as your quote?

Any statement "X should Y" carries an implicit " .... if X wants Z" with it.

Whether someone says it or not, it is there -- even if only in the head of the person who made the statement.

So for me to say "you should be very suspicious of people who are so obsessed with objective 'good'" is really saying " you should be very suspicious of people who are so obsessed with objective 'good' if you care about people disclosing their true intentions with you, and I assume you do," etc.

Already, we are well on the road to complete formalization. In the expanded context, such as that above, "you should" clearly means something like "the behavior with the highest probability of resulting in reaching your goal, all else being equal, in my opinion, is to ..."

So now we have :

"the behavior with the highest probability of resulting in reaching your goal, all else being equal, in my opinion, is to be very suspicious of people who are so obsessed with objective 'good' if you care about people disclosing their true intentions with you, and I assume you do"

Just to get you further along, I will formalize some of the other terms in there.

"in my opinion" can be formalized to "according to the conclusions reached by logical inference I have performed, possibly subconsciously, and/or insight I have had, the source of which is still not agreed upon by human philosophers/scientists .. ... because such people are often motivated by motives that are not immediately apparent..."

"if you care about" can be formalized to "if it would benefit you somehow, even in a purely psychological manner, for .. to ..."

So now we have:

"the behavior with the highest probability of resulting in reaching your goal, all else being equal, according to the conclusions reached by logical inference I have performed, possibly subconsciously, and/or insight I have had, the source of which is still not agreed upon by human philosophers/scientists, is to be very suspicious of people who are so obsessed with objective 'good' if it would benefit you somehow, even in a purely psychological manner, for people to disclose their true intentions with you, and I assume you do, because such people are often motivated by motives that are not immediately apparent"

Should I go on?

I never said it would be short and sweet. But it can be done.



Thanks for taking the time to do that.

What I have bolded in your quote describes what somebody might learn over time through practice in dealing intelligently with a large number of people.

It's not just self-referentiality; there's quite a lot of coreferentiality involved, I think it's safe to say.

How does an algorithm pack in what seems like the potentially infinite amount of info needed to make the formalization you laid out?

Thanks in advance for dealing with my questions.



Again, "all that is required is the formalization of steps that are not implicit given the laws of nature".

OK, let's leave out what is implicit given the laws of nature.

There, it's done.

Now, leaving out what is implicit given the laws of nature, how will your algorithm pack in what seems like the potentially infinite amount of info needed to make the formalization you laid out?

Your formalization, again, being: "the behavior with the highest probability of resulting in reaching your goal, all else being equal, according to the conclusions reached by logical inference I have performed, possibly subconsciously, and/or insight I have had, the source of which is still not agreed upon by human philosophers/scientists, is to be very suspicious of people who are so obsessed with objective 'good' if it would benefit you somehow, even in a purely psychological manner, for people to disclose their true intentions with you, and I assume you do, because such people are often motivated by motives that are not immediately apparent".

I don't mean to go in circles here but it seems that can't be helped. You either cannot address the question I pose or you are doing what you can to sidestep it.
 
Pixy, logical equivalence is expressed by "if and only if"
No.

Malerin, all you are doing at this point is quote-mining for less precise definitions. Logical equivalence and material equivalence are not the same thing.

I don't see why you're so reluctant to have your claim reduced to logical terms.
Reluctant? All I ask is that you get it right.

Pixy, is there any possible world where consciousness occurs without SRIP occurring (or vice versa)? If not, then consciousness occurs IF AND ONLY IF SRIP occurs.
No, this is precisely where you are wrong, and have been wrong the entire time.

Wikipedia said:
Logical equivalence is different from material equivalence. The material equivalence of p and q (often written p ↔ q) is itself another statement in the same object language as p and q, which expresses the idea "p if and only if q". In particular, the truth value of p ↔ q can change from one model to another.

The claim that two formulas are logically equivalent is a statement in the metalanguage, expressing a relationship between two statements p and q. The claim that p and q are semantically equivalent does not depend on any particular model; it says that in every possible model, p will have the same truth value as q. The claim that p and q are syntactically equivalent does not depend on models at all; it states that there is a deduction of q from p and a deduction of p from q.
We do not observe self-referential information processing and deduce the presence of consciousness. To observe self-referential information processing is to observe the presence of consciousness.
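The distinction the Wikipedia passage draws can be illustrated computationally (the formulas and names here are my own, not from the thread): material equivalence of two formulas is evaluated in one model, while logical equivalence is a metalanguage claim quantifying over every model.

```python
from itertools import product

# Three propositional formulas over variables (a, b).
p = lambda a, b: a and b
q = lambda a, b: not (not a or not b)   # De Morgan's dual: same truth table as p
r = lambda a, b: a or b

def materially_equivalent(f, g, model):
    """Truth of the object-language formula 'f <-> g' in ONE model."""
    return f(*model) == g(*model)

def logically_equivalent(f, g):
    """Metalanguage claim: 'f <-> g' holds in EVERY model."""
    return all(materially_equivalent(f, g, m)
               for m in product([True, False], repeat=2))

print(logically_equivalent(p, q))                  # True in every model
print(logically_equivalent(p, r))                  # fails in some model
print(materially_equivalent(p, r, (True, True)))   # yet true in this one model
```

p and r are materially equivalent in the model (True, True) but not logically equivalent, since the model (True, False) separates them.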
 
No.

Malerin, all you are doing at this point is quote-mining for less precise definitions. Logical equivalence and material equivalence are not the same thing.

Reluctant? All I ask is that you get it right.

No, this is precisely where you are wrong, and have been wrong the entire time.


We do not observe self-referential information processing and deduce the presence of consciousness. To observe self-referential information processing is to observe the presence of consciousness.

I don't know why you keep bringing up material equivalence. You admit your claim is a logical equivalence. I've cited numerous sources (one even an IT source, right up your alley) that show logical equivalence is expressed verbally as "if and only if". You keep bringing up the same Wikipedia link (which proves my point anyway). Wikipedia is a stepping-off point for research. To assert your claim more forcefully, you must cite other sources. Your refusal to do so can only mean you can't find other sources. Hence, you can't defend your claim.

This is not a hard concept:
A bachelor is logically equivalent to an unmarried man.
There is no possible world where a married bachelor exists.
A man is a bachelor IF AND ONLY IF he's an unmarried man (and vice versa).
Being an unmarried man is a necessary and sufficient condition for being a bachelor.

SRIP is logically equivalent to consciousness.
There is no possible world where consciousness occurs and SRIP does not occur.
Consciousness occurs IF AND ONLY IF SRIP occurs (and vice versa)
SRIP is a necessary and sufficient condition for consciousness

I've supported this with multiple sources. You're either being deliberately dishonest or afraid of the logical implications of your claim.

Either way, I've wasted enough time with you.
 
I don't know why you keep bringing up material equivalence.
I didn't bring it up. You did.

You admit your claim is a logical equivalence.
Right. Which is different from material equivalence, which is what you are asserting.

You keep bringing up the same Wikipedia link (which proves my point anyway).
No. It shows that you are wrong and explains why.

SRIP is logically equivalent to consciousness.
Right.

SRIP is a necessary and sufficient condition for consciousness
Wrong. It is not a condition. It is the same thing.

I've supported this with multiple sources.
Yes. You went looking for less precise definitions. However, you also quoted from a Wikipedia article which explicitly disagreed with you.

You're either being deliberately dishonest or afraid of the logical implications of your claim.
Again with the appeal to motive. That's a logical fallacy, Malerin; something you seem all too familiar with.

I am being precise. You are demanding that I accept something that is logically distinct from what I am actually saying, for reasons that I've given several times now.

Again:

Self-referential information processing does not cause consciousness. We don't observe self-referential information and infer consciousness. One is not a condition for the other, in either direction or both.

There aren't two variables we can connect here. There is only one.

Wikipedia said:
Logical equivalence is different from material equivalence. The material equivalence of p and q (often written p ↔ q) is itself another statement in the same object language as p and q, which expresses the idea "p if and only if q". In particular, the truth value of p ↔ q can change from one model to another.
I am making a statement of logical equivalence. You are insisting that this is a statement of material equivalence. It's not.

Compare
http://mathworld.wolfram.com/Defined.html
http://mathworld.wolfram.com/Biconditional.html

Notice that they are sometimes represented by the same symbol. Notice that they do not mean the same thing.

Either way, I've wasted enough time with you.
Oh, certainly. Assuming a false premise and cherry-picking quotes in a futile effort to support it was a complete waste of your time. I suggest you don't do that in the future.
 
To summarise, Malerin, I reject your statement for two reasons:

First, formally, you are incorrect. I am making a statement of logical equivalence. You are conflating it with your own statement of material equivalence. The two are distinct for the reasons given in Wikipedia and Mathworld.

Second, informally, terming self-referential information processing a condition for consciousness gives exactly the wrong impression. Consciousness is not an outcome or product or result of self-referential information processing. It is the process itself.

If you can accept those reasons, we can perhaps continue to a worthwhile discussion. If you just want to insist that you're right when you are quoting articles that explicitly disagree with you.... Then we can't.
 
This insight stems from decades of work in neuroscience and computer science.
When do you expect to publish?

We agree that subjective experiences are real, and we provide a mechanism for them.
No. You have perhaps provided a mechanism for SRIP. But that's where any "mechanism" or explanation stops.

What leap of faith is that, and why do you believe it is required?
You have basically two things.

1. Firstly the brain, extremely complicated, using neurons and associated structures to perform actions that appear to be (at least in some cases) a form of computation similar (but certainly not precisely identical) to computation as performed by some artificial neural networks (originally created to loosely model the computational aspects of the brain).

2. Secondly, we each have our subjective experiences. The feeling of pain, being aware of (at least some) of our thoughts, etc. Some might just say qualia to avoid having to spell it out over and over again with a lot of other words that are generally far more ambiguous.

A lot of work has been done by others and it's been established that these two things are highly correlated in some way. Fiddle with the brain in some area, and sometimes (but not always) there is some kind of change in subjective experience. Oh, and Douglas Hofstadter (who is not a neuroscientist) also wrote some books sharing his ideas that SRIP in some form might be an important or even crucial aspect of intelligence or consciousness.

Leap of faith: Consciousness is SRIP.

Not even that (1) might cause (2), or vice versa, by some as yet not understood physical process, but in fact you jump straight to stating they are one and the same thing. SRIP appears to be almost a certainty in the human brain, but for what other reason should we have any particular reason to believe that it is one and the same thing as subjective experience? Or logically equivalent, if that's something different?

We build self-referential information processing systems right now, and they have subjective experiences.
Some things I presume you claim to be having subjective experiences of their own: modern washing machines (already established), the internet, a correctly functioning flush toilet, two people talking to each other about what they plan to do together (as a pair, so perhaps there are three conscious entities involved, or do their individual SRIP machines just merge and become one when that happens?), a camera with an automatic focus system pointed at its own image in a mirror, and presumably any running quine. Brainf**k version below (by Brian Raiter):

Code:
>>+++++++>>++>>++++>>+++++++>>+>>++++>>+>>+++>>+>>+++++>>+>>++>>+>>++++++>>++>>++++>>+++++++>>+>>+++++>>++>>+>>+>>++++>>+++++++>>+>>+++++>>+>>+>>+>>++++>>+++++++>>+>>+++++>>++++++++++++++>>+>>+>>++++>>+++++++>>+>>+++++>>++>>+>>+>>++++>>+++++++>>+>>+++++>>+++++++++++++++++++++++++++++>>+>>+>>++++>>+++++++>>+>>+++++>>++>>+>>+>>+++++>>+>>++++++>>+>>++>>+>>++++++>>+>>++>>+>>++++++>>+>>++>>+>>++++++>>+>>++>>+>>++++++>>+>>++>>+>>++++++>>+>>++>>+>>++++++>>++>>++++>>+++++++>>+>>+++++>>+++++++>>+>>+++++>>+>>+>>+>>++++>>+>>++>>+>>++++++>>+>>+++++>>+++++++>>+>>++++>>+>>+>>++>>+++++>>+>>+++>>+>>++++>>+>>++>>+>>++++++>>+>>+++++>>+++++++++++++++++++>>++>>++>>+++>>++>>+>>++>>++++>>+++++++>>++>>+++++>>++++++++++>>+>>++>>++++>>+>>++>>+>>++++++>>++++++>>+>>+>>+++++>>+>>++++++>>++>>+++++>>+++++++>>++>>++++>>+>>++++++[<<]>>[>++++++[-<<++++++++++>>]<<++..------------------->[-<.>>+<]>[-<+>]>]<<[-[-[-[-[-[-[>++>]<+++++++++++++++++++++++++++++>]<++>]<++++++++++++++>]<+>]<++>]<<[->.<]<<]
Of course this quine is also completely deterministic, and requires no external input. But clearly self-referential information processing, right? How about a BF self-interpreter running a copy of itself running this same quine?
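For readers who don't want to trace the Brainf**k, the same idea fits in two lines of Python. The snippet below (mine, not from the thread) runs such a quine and checks that its output reproduces its source exactly.

```python
import io
import contextlib

# A classic two-line Python quine: a program whose output is its own source.
quine_src = "s = 's = %r\\nprint(s %% s)'\nprint(s % s)\n"

# Run the quine and capture what it prints.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine_src)

print(buf.getvalue() == quine_src)   # True: the program reproduced itself
```

The self-reference is carried entirely by `%r`, which substitutes the string's own repr back into itself.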

On the other hand, I assume that you agree that a flush toilet with a broken float valve is completely unconscious. (Cistern fills up and then water goes down overflow pipe until someone flushes rather than cistern filling up and valve closing due to correctly functioning float.)

Is any of this wrong?

Do you think there is or should be any difference between the subjective experience of being dead (as you were before you were conceived) and in an extremely deep and sound sleep?
 
This is not a hard concept:
A bachelor is logically equivalent to an unmarried man.
There is no possible world where a married bachelor exists.
A man is a bachelor IF AND ONLY IF he's an unmarried man (and vice versa).
Being an unmarried man is a necessary and sufficient condition for being a bachelor.

Unfortunately, Malerin, you are incorrect.

You can have "if and only if" scenarios in which there is no logical equivalence. I learned this in high school.
 
2. Secondly, we each have our subjective experiences. The feeling of pain, being aware of (at least some) of our thoughts, etc. Some might just say qualia to avoid having to spell it out over and over again with a lot of other words that are generally far more ambiguous.

No. Qualia are not just some neural signal to those who believe in them. They're half-magical, non-testable units of feeling that only living beings with nervous systems have. Funny, that.
 
Hmm. I think that the concept that life is inherently purposeful is not soundly supported. There's no real difference between the purpose of an amoeba and the purpose of a volcano, looked at objectively.

There's certainly no purpose outside of living, conscious things.

I agree, and I have my doubts about 'purpose' in living, conscious things (except as an abstract concept, like 'free will').
 
I'd go out on a limb and say that the behavior of an amoeba is inherently purpose driven in some sense...

In what sense?

To me, this appears to be the same error as describing evolution as purposeful. The concept of 'purpose' is a retrospective confabulation or misinterpretation in both cases.
 
When do you expect to publish?
The work has been published.

No. You have perhaps provided a mechanism for SRIP. But that's where any "mechanism" or explanation stops.
Completely incorrect.

The problem is, how can we have this subjective point of view? There can't be some subjectiveness module in the brain, because that doesn't explain anything; it just leads to infinite regress. How can this possibly work? What is it?

What it is, is a loop. A self-referential loop in the overall process. That's the only structure that can produce the observed behaviour, and it is actually happening, and the two seem to be directly correlated.

You have basically two things.

1. Firstly the brain, extremely complicated, using neurons and associated structures to perform actions that appear to be (at least in some cases) a form of computation similar (but certainly not precisely identical) to computation as performed by some artificial neural networks (originally created to loosely model the computational aspects of the brain).
It's an information processor.

2. Secondly, we each have our subjective experiences. The feeling of pain, being aware of (at least some) of our thoughts, etc. Some might just say qualia to avoid having to spell it out over and over again with a lot of other words that are generally far more ambiguous.
It's self-referential.

Leap of faith: Consciousness is SRIP.
It's not a leap of faith, it's understanding the question.

Not even that (1) might cause (2), or vice versa, by some as yet not understood physical process, but in fact you jump straight to stating they are one and the same thing. SRIP appears to be almost a certainty in the human brain, but for what other reason should we have any particular reason to believe that it is one and the same thing as subjective experience? Or logically equivalent, if that's something different?
As I keep saying to Malerin, "logically equivalent" means "one and the same thing". So both those are correct.

The only thing I've done here is cut away the waffle. Consciousness is information processing, and it's self-referential. There you go. We're done. All of this is established fact.

If you want to add something to that, go ahead. What do you want to add, and why?

Some things I presume you claim to be having subjective experiences of their own: modern washing machines (already established)
Yes, the more complex ones perform self-referential information processing.

the internet
Why and how?

a correctly functioning flush toilet
Why and how?

two people talking to each other about what they plan to do together (as a pair, so perhaps there are three conscious entities involved, or do their individual SRIP machines just merge and become one when that happens?)
Why and how?

a camera with an automatic focus system pointed at its own image in a mirror
Why and how?

and presumably any running quine.
Now that one is actually interesting. Yes, plausibly.

Of course this quine is also completely deterministic, and requires no external input. But clearly self-referential information processing, right? How about a BF self-interpreter running a copy of itself running this same quine?
My first response is, yes. That is conscious.

Why should the fact that it is deterministic matter?

Now, the issue with it being non-responsive is somewhat different. You can only tell that it's conscious by examining its internal function, essentially establishing behaviours that it does not directly produce.

In an example of a computational consciousness, I'd usually go for something that, in addition to the self-referential loop, has both input and output so that we can trigger responses, interact, test its conscious behaviours.

The question there is, is it conscious if we can't interact with it? I suggest that yes, it is; it's doing the same thing, just not in response to you.

On the other hand, I assume that you agree that a flush toilet with a broken float valve is completely unconscious. (Cistern fills up and then water goes down overflow pipe until someone flushes rather than cistern filling up and valve closing due to correctly functioning float.)
Dennett goes into that in some detail, explaining why thermostats (which are equivalent to a flush toilet) are not conscious: they are systems too limited in their representational ability to be considered in that category.

Is any of this wrong?
A whole bunch of it, yes, as far as I can tell. Though the quine example (a program that produces its own source code as output) is a very interesting one, and of course one that Hofstadter used in his books.

Do you think there is or should be any difference between the subjective experience of being dead (as you were before you were conceived) and in an extremely deep and sound sleep?
You can wake up based on sensory input. There is still something going on there.

General anaesthesia is a better parallel. That seems to switch off consciousness completely.
 
As I keep saying to Malerin, "logically equivalent" means "one and the same thing". So both those are correct.

The only thing I've done here is cut away the waffle. Consciousness is information processing, and it's self-referential. There you go. We're done. All of this is established fact.

If you want to add something to that, go ahead. What do you want to add, and why?



This is where it gets interesting. If we take the example of pain and pain asymbolia, we seem to have two different processes that might need to be added. One is the behavioral motivation created by the stimulus, which seems to constitute the suffering feeling of pain, and the other concerns more the actual perception of pain -- the intensity and location of it. The qualia part seems to refer to the first and not the second; but we can still be conscious of the intensity and location of pain. I guess one question might be -- are we simply conscious of this aspect of pain, or does the perception actually constitute some part of consciousness?

I've never met anyone with pain asymbolia so I have no way to know how they might answer the question of being conscious of pain unless they are asked about it.
 
Reading what I've bolded I hope you understand you have not answered the question at all.

You say "all that is required is the formalization of steps that are not implicit given the laws of nature."

OK fine. I'll refer back to your original quote now:




Again, "all that is required is the formalization of steps that are not implicit given the laws of nature".

OK, let's leave out what is implicit given the laws of nature.

There, it's done.

Now, leaving out what is implicit given the laws of nature, how will your algorithm pack in what seems like the potentially infinite amount of info needed to make the formalization you laid out?

Your formalization, again, being: "the behavior with the highest probability of resulting in reaching your goal, all else being equal, according to the conclusions reached by logical inference I have performed, possibly subconsciously, and/or insight I have had, the source of which is still not agreed upon by human philosophers/scientists, is to be very suspicious of people who are so obsessed with objective 'good' if it would benefit you somehow, even in a purely psychological manner, for people to disclose their true intentions with you, and I assume you do, because such people are often motivated by motives that are not immediately apparent".

I don't mean to go in circles here but it seems that can't be helped. You either cannot address the question I pose or you are doing what you can to sidestep it.

You missed the third option -- that you don't understand what I am saying.

Suppose there is a really dumb mouse. All it can do is turn left in a maze when there is a green wall in front of it and turn right when there is a red wall in front of it.

Suppose we put the mouse in a maze full of red and green walls, and no other wall colors.

What is a formalization for the mouse's behavior?

I am saying a sufficient formalization is "if wall is green, turn left, if wall is red, turn right."

You seem to be saying the required formalization would be "wall 1 is green, so mouse turns left, wall 2 is red, so mouse turns right, wall 3 is red, so mouse turns right, ... wall N is green, so mouse turns left."

My whole last post was to explain that the latter can be fully derived from the former if you can look at the maze. If you can figure out the behavior rules that DNA explicitly encodes, the rest of a person's life follows implicitly from their environment.

Which is what many religious folk can't understand. DNA doesn't need to encode every possible behavior of your body and every chemical in it, just like a ball doesn't encode the behavior of dropping and wood doesn't encode the behavior of burning. So a sufficient formalization of a person's behavior is going to be much less complex than a full description of that person's behavior.
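The mouse example translates almost directly into code (a sketch of my own; the maze contents are arbitrary): the compact formalization is two rules, and the full trace of a run is derived by combining those rules with a particular maze rather than being encoded up front.

```python
# The compact formalization: the mouse's entire behavior rule.
def mouse_rule(wall):
    return "left" if wall == "green" else "right"   # red wall -> turn right

# A particular "maze": the environment one run of the rule encounters.
maze = ["green", "red", "red", "green", "red"]

# The full wall-by-wall trace falls out of rule + environment;
# it never has to be written into the rule itself.
trace = [mouse_rule(wall) for wall in maze]
print(trace)   # ['left', 'right', 'right', 'left', 'right']
```

A longer maze only lengthens the trace; the rule stays two lines, which is the asymmetry being claimed for DNA versus a lived life.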
 
