
Has consciousness been fully explained?

Status
Not open for further replies.
Well, I will start by drawing the line at language. More specifically, at abstract representations. In humans that would be the first time we recognize the abstract representation essential to all others: when we say "I" for the first time. That is approximately the age when we have mastered basic language and we start thinking. By thinking I mean verbal cognition.

How's that?
That is useful, although I was referring more to the criteria for distinguishing minimal consciousness from non-consciousness, i.e. to decide where on the scale of complexity consciousness is first manifest in a system - be it in microbes, plants, insects, reptiles, mammals or digital systems - what are the criteria by which we can first distinguish consciousness? Pixy says any SRIP is sufficient, which has met with some disagreement.

Are you suggesting any consciousness requires abstract representation (symbology?) as a minimum?
 
My experiences can be broken down into combinations of elementary subjective qualities such as "blue", "cold", "sweet", "bright", "rough", etc.

No, they can't. Each memory is a composite of all of the "qualities" that were part of the original event. You can't divide them into "components". That's one of the reasons why memory is associative. It's not a big database where you can search for "cold".

Alright, now it's clear that you're just being contrary for contrary's sake. Tell me, in your experience, is the sensation of "cold" identical to, say, the flavor of "bitter"? :rolleyes:

It's those basic subjective qualities brought together into an experience at any given time that we're referring to as "qualia".

What about "square root"? Is that a quale, too? Sure, it's easy to call qualia those things you can easily see, but what about abstract concepts?

Last I checked, I couldn't 'easily see' the sensation of nausea either, but it's still a quale. At least in my experience, abstract concepts are usually manifested in my mind as some learned symbol(s) which encapsulate some sense(s) of the concept(s) I've been trained to associate them with. When one is dealing with numeric concepts, it seems that even the barest subjective blips will do, so long as one maintains the appropriate syntactical relations between them. It's like organizing subjective tally marks, or beads. It doesn't matter which quale or combination of qualia one employs; so long as they are ordered and manipulated properly in one's awareness they will suffice.

"Qualia" isn't a postulated hypothetical but a categorical label for an indisputable given: we experience, and our experiences can be reduced to combinations of subjective qualities. Period. It's that simple.

Your say-so doesn't make it true.

AMM: "'Numbers' aren't a postulated hypothetical but a categorical label for an indisputable given--"

Belz...:"Hmph! Your say-so doesn't make it true x-P "

:rolleyes:
 
Is that the best you can do RD?

Pretty much.

Because quite frankly -- no pun intended -- I don't think you have even a clue about what I am talking about. At least, you have not demonstrated it thus far. In particular, your current argument regarding my statements being "contradictory" resembles something a mediocre first year philosophy student might try to piece together after reading a passage they didn't fully understand yet 'still needed to get the assignment done.' I remember turning in book reports like that back in grade school -- hey I didn't read the whole book, just a chapter or two, so I threw some words together and tried to make it sound "existential" or something, so the teacher would have a hard time figuring out that I was clueless.

So in an effort to decide whether continued communication with you on this subject can ever be fruitful, I am querying you regarding how much relevant education you have.

The thought I am having right now is that if you can't grasp the meaning of well formed statements referencing scientific concepts -- including those of the mathematical sciences -- then I am not sure spending time formulating responses to anything you ask is worth it.
 
What do you mean?
No big deal - just that RD asked you what he thought might be a sensitive question, but didn't explain why he wanted to know - how it was relevant to the discussion. I'd have wanted to know the reason for it. YMMV.
 
No big deal - just that RD asked you what he thought might be a sensitive question, but didn't explain why he wanted to know - how it was relevant to the discussion. I'd have wanted to know the reason for it. YMMV.

I'm pretty sure you are smart enough to guess why I would ask a question like that. I'm pretty sure Frank is as well. I'm also pretty sure that's why Frank didn't answer it.
 
I'm pretty sure you are smart enough to guess why I would ask a question like that. I'm pretty sure Frank is as well. I'm also pretty sure that's why Frank didn't answer it.
I'm old fashioned - I'd rather try to establish the problem area than imply that a formal science education is required to understand the topic - which is how it comes across without clarifying explanation.
 
Now I'm confused. What do you think changes in a child between the 'goo-goo' stage and the 'I go potty' stage to change it from unconscious to conscious?

And do you mean that all non-verbal humans are unconscious?

I am not using the medical definition of conscious when I talk about consciousness.
I am defining consciousness as being aware of one's own thinking. Thinking being defined as verbal cognition.

What changes when a child can say "I" is that it can now differentiate abstractly between itself and everything else.
 
rocketdodger said:
Is that the best you can do RD?

Pretty much.

Because quite frankly -- no pun intended -- I don't think you have even a clue about what I am talking about. At least, you have not demonstrated it thus far. In particular, your current argument regarding my statements being "contradictory" resembles something a mediocre first year philosophy student might try to piece together after reading a passage they didn't fully understand yet 'still needed to get the assignment done.' I remember turning in book reports like that back in grade school -- hey I didn't read the whole book, just a chapter or two, so I threw some words together and tried to make it sound "existential" or something, so the teacher would have a hard time figuring out that I was clueless.

So in an effort to decide whether continued communication with you on this subject can ever be fruitful, I am querying you regarding how much relevant education you have.

The thought I am having right now is that if you can't grasp the meaning of well formed statements referencing scientific concepts -- including those of the mathematical sciences -- then I am not sure spending time formulating responses to anything you ask is worth it.


You don't recognize the argument I posed?
 
dlorde said:
Well, I will start by drawing the line at language. More specifically, at abstract representations. In humans that would be the first time we recognize the abstract representation essential to all others: when we say "I" for the first time. That is approximately the age when we have mastered basic language and we start thinking. By thinking I mean verbal cognition.

How's that?
That is useful, although I was referring more to the criteria for distinguishing minimal consciousness from non-consciousness, i.e. to decide where on the scale of complexity consciousness is first manifest in a system - be it in microbes, plants, insects, reptiles, mammals or digital systems - what are the criteria by which we can first distinguish consciousness? Pixy says any SRIP is sufficient, which has met with some disagreement.

Are you suggesting any consciousness requires abstract representation (symbology?) as a minimum?

Yes, for the definition of consciousness I used in my last post.

We can recognize consciousness through behavior which indicates the ability to differentiate abstractly between subject and object.

Memory seems to play an important role in this ability.
The ability to become aware of a continuity of my relationship with an object even when it's no longer experienced.
When children start claiming objects as theirs not just when they are experiencing them, but when someone else might be.
 
It really doesn't make any difference, in principle. Whether or not animals are conscious - and if they are, which animals - doesn't change anything essential about the discussion.
Don't you think there may be a range of levels of consciousness across the animal kingdom? (I refer to animals because the most obviously conscious entities are animals). If so, maybe we can learn about the structural requirements for consciousness by studying creatures at the boundary between consciousness and non-consciousness... given reasonable criteria for identifying it.

Clearly the mechanics of possible animal consciousness would be similar to that of human consciousness anyway.
Similar 'mechanics' - how? Do you mean a CNS with a similar basic architecture?

"What about animals - are they conscious?" is usually a way to create a diversion.
I see it as a step to establishing a common understanding of what we mean by 'consciousness' in a particular discussion. Some people think consciousness is SRIP in any system, some that it necessarily requires life, some that it's solely a human facility. Clearly there is a major semantic difference to be resolved. I'm curious to know where the other contributors here draw the line, if they can say.
 
Yes, for the definition of consciousness I used in my last post.

We can recognize consciousness through behavior which indicates the ability to differentiate abstractly between subject and object.
OK, that's a start.

Behaviourally an ant can differentiate between self and other, and can distinguish various kinds of 'other' and treat them accordingly; some level of abstraction is necessary for this behaviour. Is this differentiating abstractly between subject and object? If not, can you suggest a simple example?

Memory seems to play an important role in this ability.
The ability to become aware of a continuity of my relationship with an object even when it's no longer experienced.
Children generally develop a sense of object persistence at around 8-12 months.

When children start claiming objects as theirs not just when they are experiencing them, but when someone else might be.
Competition for favoured items is generally present by around 18 months...

There are many stages of cognitive and social development in the first two years; at which stage would you suggest consciousness first appears?
 
Don't you think there may be a range of levels of consciousness across the animal kingdom? (I refer to animals because the most obviously conscious entities are animals). If so, maybe we can learn about the structural requirements for consciousness by studying creatures at the boundary between consciousness and non-consciousness... given reasonable criteria for identifying it.


Similar 'mechanics' - how? Do you mean a CNS with a similar basic architecture?


I see it as a step to establishing a common understanding of what we mean by 'consciousness' in a particular discussion. Some people think consciousness is SRIP in any system, some that it necessarily requires life, some that it's solely a human facility. Clearly there is a major semantic difference to be resolved. I'm curious to know where the other contributors here draw the line, if they can say.

westprog won't answer that question. Many people have asked him/her over the years and the response is always avoidance.

You are probably clever enough to figure out why.
 
You don't recognize the argument I posed?

I recognize that it is a really bad attempt at constructing something similar to a portion of the standard proof of Gödel's first incompleteness theorem.

That's my point -- you seem to think that just "following the pattern" of a mathematical theorem, but replacing words with anything you want, leads to proof of any ol' abstract concept you are arguing in favor of.

This is incorrect. In this case, formulating a Gödel string is nontrivial and requires a bunch of prep work. In particular, you have to define everything you are going to use in the string formally, and make sure the string is well-formed, i.e. follows the set of rules used to derive statements of the system, etc.

In other words, if
if I am in a simulation then it's not true that if my word "orange" does mean something, then it means orange oranges.
is a Gödel string, then I am Mickey Mouse.

Furthermore I don't see how it has any bearing on whether or not we actually are in a simulation.
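To give a sense of the prep work being described, here's a toy sketch of the very first step, assigning every symbol of the formal system a numeric code so a formula becomes a single number. The symbol table and the formula below are my own hypothetical illustrations, not anything from the thread:

```python
# Toy Gödel numbering: encode the i-th symbol of a formula as the
# i-th prime raised to that symbol's code, and multiply the results.
# The symbol table is a made-up fragment, purely for illustration.
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '(': 4, ')': 5, '+': 6}

def primes(n):
    """Return the first n primes by trial division (fine for toy sizes)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a formula string; raises KeyError for symbols not in the table."""
    g = 1
    for p, sym in zip(primes(len(formula)), formula):
        g *= p ** SYMBOLS[sym]
    return g

# "0=0" encodes as 2^1 * 3^3 * 5^1 = 270
print(godel_number("0=0"))  # 270
```

Note this only handles the encoding; checking that a string is well-formed, and defining the derivation rules it must follow, is separate machinery on top of this, which is the point about natural-language sentences not qualifying as Gödel strings.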
 
westprog won't answer that question. Many people have asked him/her over the years and the response is always avoidance.

You are probably clever enough to figure out why.

Perhaps - but I'm a sceptic, and that's hearsay, so I have to ask and wait to see the response myself before drawing any conclusion :D
 
rocketdodger said:
You don't recognize the argument I posed?

I recognize that it is a really bad attempt at constructing something similar to a portion of the standard proof of Gödel's first incompleteness theorem.

That's my point -- you seem to think that just "following the pattern" of a mathematical theorem, but replacing words with anything you want, leads to proof of any ol' abstract concept you are arguing in favor of.

This is incorrect. In this case, formulating a Gödel string is nontrivial and requires a bunch of prep work. In particular, you have to define everything you are going to use in the string formally, and make sure the string is well-formed, i.e. follows the set of rules used to derive statements of the system, etc.

In other words, if
if I am in a simulation then it's not true that if my word "orange" does mean something, then it means orange oranges.
is a Gödel string, then I am Mickey Mouse.

Furthermore I don't see how it has any bearing on whether or not we actually are in a simulation.


Incorrect guess. Try again. Would you like a clue?
 
Easy question. Easy answer. Just pose the question and wait for 113 pages of word salad to be posted. Then scan page 114. If it's still word salad, then the answer is a big fat NO.
 
Frank, I don't want to offend you, but do you have any formal education at all in subjects like mathematics, physics, chemistry, biology, computing, etc ... anything a university would put in its "school of science?"

If you have a formal education in math, you should sue your college:

Why Malerin is Wrong About Bayes Theorem

GreedyAlgorithm said:
since we exist we know P(E|H) + P(E|~H) must sum to 1.
buh buh what?

This is just very, very wrong. I can't even figure out what went wrong in your head to make you think it is right. Can you tell us what you think E and H are so we can help you figure out your mistake?
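For anyone following along, a quick toy calculation (the numbers are my own, purely illustrative) showing why the quoted claim fails: likelihoods of the same evidence under rival hypotheses need not sum to anything in particular; it is the posteriors P(H|E) and P(~H|E) that must sum to 1.

```python
# Hypothetical toy numbers: H = hypothesis true, E = evidence observed.
p_H = 0.3
p_E_given_H = 0.9     # evidence is likely if H holds
p_E_given_notH = 0.4  # evidence can also occur if H is false

# The two likelihoods do NOT sum to 1 (here they sum to 1.3):
print(p_E_given_H + p_E_given_notH)

# What does hold is the law of total probability:
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)  # 0.55

# ...and Bayes' theorem, whose posteriors sum to 1:
p_H_given_E = p_E_given_H * p_H / p_E
p_notH_given_E = p_E_given_notH * (1 - p_H) / p_E
print(p_H_given_E + p_notH_given_E)  # ≈ 1.0
```

So "since we exist, the likelihoods sum to 1" conflates P(E|H) with P(H|E); existence only tells you P(E) = 1 in the trivial case where E is "we exist" and has already been observed.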
 
Easy question. Easy answer. Just pose the question and wait for 113 pages of word salad to be posted. Then scan page 114. If it's still word salad, then the answer is a big fat NO.
I was going to forego the foregone conclusion for a little longer, and I must say that I'm not quite sure it's fair to call it word salad - at least not all of it. Some of these guys know more about epistemology and biology and psychology and cybernetics than I ever did or will, and argue their points with vigor, authority and precision. That is what makes the utter hopelessness of agreement so entertaining.
 
rocketdodger said:
Incorrect guess. Try again. Would you like a clue?

If it was something I cared about, then I would say yes, I would like a clue.

So no, I wouldn't like a clue.


I'll just tell you then:

Hilary Putnam

In philosophy, the computational theory of mind is the view that the human mind ought to be conceived as an information processing system and that thought is a form of computation. The theory was proposed in its modern form by Hilary Putnam in 1961[citation needed] and developed by Jerry Fodor in the 60s and 70s.[1] This view is common in modern cognitive psychology and is presumed by theorists of evolutionary psychology.

The computational theory of mind is a philosophical concept that the mind functions as a computer or symbol manipulator. The theory is that the mind computes input from the natural world to create outputs in the form of further mental or physical states. A computation is the process of taking input and following a step by step algorithm to get a specific output. The computational theory of mind claims that there are certain aspects of the mind that follow step by step processes to compute representations of the world.


Wait, there's more:

In the late 1980s, Putnam abandoned his adherence to functionalism and other computational theories of mind. His change of mind was primarily due to the difficulties that computational theories have in explaining certain intuitions with respect to the externalism of mental content. This is illustrated by Putnam's own Twin Earth thought experiment (see Philosophy of language).[11] He also developed a separate argument against functionalism in 1988, based on Fodor's generalized version of multiple realizability. Asserting that functionalism is really a watered-down identity theory in which mental kinds are identified with functional kinds, Putnam argued that mental kinds may be multiply realizable over functional kinds. The argument for functionalism is that the same mental state could be implemented by the different states of a universal Turing machine.

http://en.wikipedia.org/wiki/Hilary_Putnam


Putnam's Brain in a Vat thought experiment is what the post you derided is largely based on.

Please try to have a little fun RD. I am.
 
