
Has consciousness been fully explained?

And here's where it gets interesting. If we take the example of pain and pain asymbolia, we seem to have two different processes that might need to be added. One is the behavioral motivation created by the stimulus, which seems to constitute the suffering feeling of pain; the other concerns the actual perception of pain -- its intensity and location. The qualia part seems to refer to the first and not the second, but we can still be conscious of the intensity and location of pain. I guess one question might be -- are we simply conscious of this aspect of pain, or does the perception actually constitute some part of consciousness?

I've never met anyone with pain asymbolia so I have no way to know how they might answer the question of being conscious of pain unless they are asked about it.
Sure. I think the question of consciousness itself is a red herring. It's something that's actually a simple physical process that we're using as a proxy for all this other complex stuff.
 
Sure. I think the question of consciousness itself is a red herring. It's something that's actually a simple physical process that we're using as a proxy for all this other complex stuff.


I wonder what a minimal list would look like? We've got at least the reticular activating system and the cingulate (which is often conceived as part of the RAS) for being awake, alert, and having this sense of motivation to act, as well as several different perceptual systems and the SRIP (self-referential information processing) system of the parietal lobe's attention and body-planning areas.

There is just no sense at all of any one thing; it's massively parallel processing all the way.


ETA:

Or, rather, it is very likely that all of those systems depend on SRIP, but there are lots of different systems at play. The nervous system is set up in big relay loops, after all.
 
The only thing I've done here is cut away the waffle. Consciousness is information processing, and it's self-referential. There you go. We're done. All of this is established fact.

If you want to add something to that, go ahead. What do you want to add, and why?

Those last two questions seem to be the crux of this thread...
 
I wonder what a minimal list would look like? We've got at least the reticular activating system and the cingulate (which is often conceived as part of the RAS) for being awake, alert, and having this sense of motivation to act, as well as several different perceptual systems and the SRIP (self-referential information processing) system of the parietal lobe's attention and body-planning areas.

There is just no sense at all of any one thing; it's massively parallel processing all the way.


ETA:

Or, rather, it is very likely that all of those systems depend on SRIP, but there are lots of different systems at play. The nervous system is set up in big relay loops, after all.
"Consciousness is not a homogenous entity it is a heterogenous confabulation of multiple processes."
 
You missed the third option -- that you don't understand what I am saying.

Suppose there is a really dumb mouse. All it can do is turn left in a maze when there is a green wall in front of it and turn right when there is a red wall in front of it.

Suppose we put the mouse in a maze full of red and green walls, and no other wall colors.

What is a formalization for the mouse's behavior?

I am saying a sufficient formalization is "if wall is green, turn left; if wall is red, turn right."

You seem to be saying the required formalization would be "wall 1 is green, so mouse turns left; wall 2 is red, so mouse turns right; wall 3 is red, so mouse turns right; ... wall N is green, so mouse turns left."

My whole last post was to explain that the latter can be fully derived from the former if you can look at the maze. If you can figure out the behavior rules that DNA explicitly encodes, the rest of a person's life follows implicitly from their environment.
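
To make that concrete, here is a minimal sketch in Python (hypothetical names; it assumes the maze contains only red and green walls, as stipulated above):

Code:
def mouse_rule(wall_color):
    # The entire "sufficient formalization": two conditionals.
    # (Assumes only red and green walls, as stipulated above.)
    return "left" if wall_color == "green" else "right"

maze = ["green", "red", "red", "green"]  # the environment, not part of the rule

# The wall-by-wall "required formalization" is derived, not stored anywhere:
for n, color in enumerate(maze, start=1):
    print(f"wall {n} is {color}, so mouse turns {mouse_rule(color)}")

The point being that the long wall-by-wall trace adds no information beyond the rule plus the maze.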

Which is what many religious folk can't understand. DNA doesn't need to encode every possible behavior of your body and every chemical in it, just as a ball doesn't encode the behavior of dropping and wood doesn't encode the behavior of burning. So a sufficient formalization of a person's behavior is going to be much less complex than a full description of that person's behavior.


Not talking about dumb mouse behavior. I have no religious motivation. Not asking about DNA encoding.

Try again.
 
I'd go out on a limb and say that the behavior of an amoeba is inherently purpose-driven in some sense...

In what sense?

To me, this appears to be the same error as describing evolution as purposeful. The concept of 'purpose' is a retrospective confabulation or misinterpretation in both cases.

Market forces are not purposeful, but the actions of individual consumers are. Likewise, while evolution may not be purposeful in itself, the behaviors of individual organisms are.

ETA: BTW, are you arguing that there is no such thing as purpose, motivation, or intent?
 
Not talking about dumb mouse behavior. I have no religious motivation. Not asking about DNA encoding.

Try again.

Try thinking, Frank. It helps sometimes. The logic of our discussion is very clear:

1) I claimed that all of human experience can be described mathematically.
2) You asked for the formalization of the human sentiment "ought."
3) I started such a formalization, at a very high level, and explained how one would proceed to lower and lower levels. I also explained that I have neither the time nor desire to proceed to those lower levels because it would be very complex.
4) You expressed the idea that if it was so complex perhaps it could not be done.
5) I explained to you how some things that seem very complex are in fact not as complex as you think.

Now, allow me to add

6) I still have neither the time nor desire to proceed to more detailed formalizations. My point was just that "beyond what I feel like doing" doesn't imply "impossible because of complexity."

7) Your current position is tantamount to asking "show me that you can reach 203295039226 by repeatedly adding 1 to a smaller number" and then not accepting a proof by induction because "that is not what you asked for." And every time I show you "see, when you add 1, the number gets bigger by 1 ... eventually that sum will be reached" you parrot "but you haven't reached it yet." Brilliant.
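
Spelled out, that induction is just this (a schematic sketch, with P(n) read as "n can be reached from 0 by repeatedly adding 1"):

\[
P(0), \qquad \forall n\,\bigl(P(n) \Rightarrow P(n+1)\bigr) \;\;\vdash\;\; P(203295039226)
\]

The base case and the step are both trivial to verify; demanding that each of the 203295039226 instances be written out adds nothing.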
 
rocketdodger said:
Not talking about dumb mouse behavior. I have no religious motivation. Not asking about DNA encoding.

Try again.

Try thinking, Frank. It helps sometimes. The logic of our discussion is very clear:

1) I claimed that all of human experience can be described mathematically.
2) You asked for the formalization of the human sentiment "ought."
3) I started such a formalization, at a very high level, and explained how one would proceed to lower and lower levels. I also explained that I have neither the time nor desire to proceed to those lower levels because it would be very complex.
4) You expressed the idea that if it was so complex perhaps it could not be done.
5) I explained to you how some things that seem very complex are in fact not as complex as you think.

Now, allow me to add

6) I still have neither the time nor desire to proceed to more detailed formalizations. My point was just that "beyond what I feel like doing" doesn't imply "impossible because of complexity."

7) Your current position is tantamount to asking "show me that you can reach 203295039226 by repeatedly adding 1 to a smaller number" and then not accepting a proof by induction because "that is not what you asked for." And every time I show you "see, when you add 1, the number gets bigger by 1 ... eventually that sum will be reached" you parrot "but you haven't reached it yet." Brilliant.


No.

I am addressing a possible problem with the theory of computationalism/functionalism which you refuse to address.

The problem is computationalism/functionalism can't account for some mental states being dependent in part on their relationship to the external world.
 
No.

I am addressing a possible problem with the theory of computationalism/functionalism which you refuse to address.

The problem is computationalism/functionalism can't account for some mental states being dependent in part on their relationship to the external world.

Like what?

How is "ought" related to "relationship with the external world?"

You seem to be going down two different paths here ...
 
Unfortunately, Malerin, you are incorrect.

You can have "if and only if" scenarios in which there is no logical equivalence. I learned this in high school.

Source? Everything I've found equates IFF to logical equivalence.

ETA:

Certainly it is the case that when A is logically equivalent to B, "A iff B" is true.
http://en.wikipedia.org/wiki/If_and_only_if

Example:

A person is a bachelor IFF that person is a marriageable man who has never married.
(same wiki source)

"Bachelor" and "marriageable man who has never married" are logically equivalent. It's beeing asserted that SRIP and "consciousness" are logically equivalent. Therefore, based on the preceeding example of logical equivalence,:
A person is conscious IFF that person is doing SRIP.

This is elementary propositional logic. If you (or anyone) disagree, you NEED to post a source. I've posted several.
 
Source? Everything I've found equates IFF to logical equivalence.
That is untrue. You did find some sources that make that mistake, but if you ever bother to read that Wikipedia article I've quoted for you four times, it explains precisely why this is wrong.
 
And each time you quote it you undermine your argument:

The logical equivalence of p and q is sometimes expressed as P IFF Q.
http://en.wikipedia.org/wiki/Logical_equivalence

Forget material equivalence. You're not making a claim of material equivalence, are you?

You also realize that WIKI is the jumping-off point for sourcing and can be edited to say whatever someone wants it to say.

Please post a source that backs up your claim (and doesn't back up mine, LOL).
 
And each time you quote it you undermine your argument:

The logical equivalence of p and q is sometimes expressed as P IFF Q.
http://en.wikipedia.org/wiki/Logical_equivalence
Nice cherry picking and misquoting there, Malerin.

The real article on Wikipedia says this:
The logical equivalence of p and q is sometimes expressed as p ≡ q or p ⇔ q. However, these symbols are also used for material equivalence; the proper interpretation depends on the context. Logical equivalence is different from material equivalence, although the two concepts are closely related.
In other words, you are wrong, the article you just quoted says you are wrong, and you explicitly removed the part of the quote that explains why.
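
For anyone who wants to see the difference mechanically rather than argue about symbols: material equivalence is the truth value of "p iff q" at one particular valuation, while logical equivalence requires agreement at every valuation. A minimal sketch in Python (hypothetical helper names):

Code:
from itertools import product

def material_equiv(p, q):
    # Truth value of the biconditional "p iff q" at ONE valuation.
    return p == q

def logically_equivalent(f, g, n_vars):
    # f and g are logically equivalent only if they agree at EVERY valuation.
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n_vars))

# "p and q" is logically equivalent to "q and p":
print(logically_equivalent(lambda p, q: p and q, lambda p, q: q and p, 2))  # True

# But p is NOT logically equivalent to q, even though "p iff q" happens
# to be (materially) true at valuations where they agree:
print(material_equiv(True, True))                               # True
print(logically_equivalent(lambda p, q: p, lambda p, q: q, 2))  # False

Same symbol, two different claims; which one is meant depends on context, exactly as the article says.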
 
rocketdodger said:
No.

I am addressing a possible problem with the theory of computationalism/functionalism which you refuse to address.

The problem is computationalism/functionalism can't account for some mental states being dependent in part on their relationship to the external world.

Like what?

How is "ought" related to "relationship with the external world?"

You seem to be going down two different paths here ...


Not at all.

Referring to how some mental content, such as adherence to a norm, is dependent in part on the mind's relationship to the external world.

norm noun \ˈnȯrm\
Definition of NORM
1: an authoritative standard : model
2: a principle of right action binding upon the members of a group and serving to guide, control, or regulate proper and acceptable behavior

http://www.merriam-webster.com/dictionary/norms?show=0&t=1294456142


Simply put, that relationship is external to the individual's nervous system.

Looks to me as if computationalism fails at describing this particular aspect of consciousness.
 
And each time you quote it you undermine your argument:

The logical equivalence of p and q is sometimes expressed as P IFF Q.
http://en.wikipedia.org/wiki/Logical_equivalence

PixyMisa said:
Nice cherry picking and misquoting there, Malerin.

Where's the misquote? Where's the cherry-picking? Does it or does it not say that the logical equivalence of p and q is sometimes expressed as P IFF Q?


PixyMisa said:
The real article on Wikipedia says this:
The logical equivalence of p and q is sometimes expressed as p ≡ q or p ⇔ q.

Actually, it says this:

This article does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (December 2009)
http://en.wikipedia.org/wiki/Logical_equivalence

Anyway, Pixy, are you telling me you don't know what three lines or a two-line double arrow means in logic?
http://en.wikipedia.org/wiki/List_of_logic_symbols




PixyMisa said:
In other words, you are wrong, the article you just quoted says you are wrong, and you explicitly removed the part of the quote that explains why.

I'm not wrong, as I've shown with several other sources. Material equivalence is irrelevant, as you're asserting a logical equivalence between SRIP and consciousness. I don't need a "proper interpretation" -- you've made your claim of logical equivalence quite clear.

Now please, do you have another source besides Wiki? Wiki is a good starting point, but that's all you seem to have in your bag of tricks. I'll remind you again what it says at the top of your source you keep citing:

This article does not cite any references or sources.
Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed. (December 2009)

http://en.wikipedia.org/wiki/Logical_equivalence

What kind of skeptic are you? If you can't find anything else to agree with you besides an unsourced, unreferenced article in an online encyclopedia that can be edited at will, concede the point and let's move on.
 
No. Qualia are not just some neural signal to those who believe in them. They're some half-magical, non-testable units of feeling that only living beings with nervous systems have. Funny, that.

The really funny thing about it is that there are living beings with nervous systems who have experiences, and who seem to be unhappy about the fact, and seem to think that being alive and having emotions is some kind of magic trick to be swept under the carpet. Life's dirty little secret.
 
ETA: BTW, are you arguing that there is no such thing as purpose, motivation, or intent?

Subjective human experience is a problem for a materialist view of the universe. If materialism is to be taken seriously, then a whole lot of concepts have to be denied existence. Purpose is just one of them. There's no "purpose" in a scientific analysis. So either "purpose" has to be given an entirely objective definition, or it has to be regarded as meaningless.

This process involves throwing out a lot of babies with the bathwater, so some of the more timid materialists will try to cling on to purpose, values, meaning, etc., wandering around it and looking for get-out options. There really aren't any, though.
 
No.

I am addressing a possible problem with the theory of computationalism/functionalism which you refuse to address.

The problem is computationalism/functionalism can't account for some mental states being dependent in part on their relationship to the external world.

The "I don't have the time and inclination to explain this right now" bit is not remotely valid. The difference between the self-contained computationalist analysis and the actual interactive brain is fundamental, and remains the big hole in the theory. If the gap can be bridged, it should be possible to walk the steps, in principle at least. That's never successfully been done.
 
ETA: BTW, are you arguing that there is no such thing as purpose, motivation, or intent?

Yes, as it happens, I am. I suggest that these are abstract convenience concepts to maintain a dualistic sense of wilful self (much like the concept of free will).

As I see it, we introspect and evaluate (or confabulate) our most likely and/or desirable course of action, and label this our 'intent', and we call our 'purpose' or 'motivation' the causes (explanations/reasons) we introspect and/or confabulate for this most likely and/or desirable course of action. We can also model the probable actions of others and label them in the same way, but these labels are convenient fictions that overload the underlying semantics.

What I'm really saying is that the wilful self is a convenient fiction (unlike Will Self who, conveniently, writes fiction), and these terms simply bolster that fiction.

You may find this a falsely mechanistic way of looking at the subject, but I hope you won't dismiss it without reasonable consideration.
 
dlorde said:
ETA: BTW, are you arguing that there is no such thing as purpose, motivation, or intent?

Yes, as it happens, I am. I suggest that these are abstract convenience concepts to maintain a dualistic sense of wilful self (much like the concept of free will).

As I see it, we introspect and evaluate (or confabulate) our most likely and/or desirable course of action, and label this our 'intent', and we call our 'purpose' or 'motivation' the causes (explanations/reasons) we introspect and/or confabulate for this most likely and/or desirable course of action. We can also model the probable actions of others and label them in the same way, but these labels are convenient fictions that overload the underlying semantics.

What I'm really saying is that the wilful self is a convenient fiction (unlike Will Self who, conveniently, writes fiction), and these terms simply bolster that fiction.

You may find this a falsely mechanistic way of looking at the subject, but I hope you won't dismiss it without reasonable consideration.


Why?
 