
Has consciousness been fully explained?

Status
Not open for further replies.
As a way of getting my bearings in this thread, and possibly shedding some light on how each of us is thinking about qualia, subjective experience, and consciousness, I'd like it if we can answer these two yes/no questions:

Are qualia a necessary consequence of consciousness?

(I.e., wherever we find consciousness, there will we find qualia. Symbolically: C --> Q)

Is consciousness a necessary consequence of qualia?

(I.e., wherever we find qualia, there will we find consciousness. Symbolically: Q --> C)

I want to know what sorts of logical entailment, if any, different participants see between qualia and consciousness.

Short answer: Qualia are to consciousness as quanta are to matter.
 
"For some the spell lasted only while the voice spoke to them, and when it spoke to another they smiled, as men do who see through a juggler's trick while others gape at it."

My advice: don't waste any more of your time.

Good advice. I'll leave 'em to it.
 
rocketdodger said:
I've bolded the part you seem to be having a hard time with.

Were we (possibly) in a simulation with no connection whatsoever to the external world, such an area of communicative commonality might be difficult to establish.

Why?


Seems self evident assuming one is using words and sentences. Or are you thinking of communication taking place mathematically?

Would logical axioms necessarily be interpreted the same everywhere?

For example, the computer running the simulation that you might be in could be totally haywire and spitting out random inconsistencies :D

rocketdodger said:
I hope that helps.

Nope. In fact every new post of yours makes me wonder more and more how you can reach the conclusions you do.


Not sure what I'm saying now that differs from what you recently said had been your position from the beginning:

rocketdodger said:
You can invalidate Putnam's deduction by claiming there is no connection whatsoever between the real/external world and the simulated/vat world (that you were in?). Words wouldn't represent anything in the simulated/vat world and there'd be no foothold to carry out the deduction Putnam proposes.

But your simulation argument wouldn't mean anything then either.

Frank this has been my position from the very beginning. I am sorry you misunderstood.
 
Just like any abstract concept? If we define a unicorn or a flying spaghetti monster, it doesn't mean they exist, except as concepts.

Not sure that's worth 2 cents...

A unicorn is not an abstract concept.

The concept of a unicorn may be an abstract concept, but then so is the concept of a horse.
 
As a way of getting my bearings in this thread, and possibly shedding some light on how each of us is thinking about qualia, subjective experience, and consciousness, I'd like it if we can answer these two yes/no questions:

Are qualia a necessary consequence of consciousness?

(I.e., wherever we find consciousness, there will we find qualia. Symbolically: C --> Q)

Is consciousness a necessary consequence of qualia?

(I.e., wherever we find qualia, there will we find consciousness. Symbolically: Q --> C)

I want to know what sorts of logical entailment, if any, different participants see between qualia and consciousness.

ETA: On second thought, I'm not sure this will get us anywhere since "qualia" is still an ill-defined term.

Well, it depends. If you associate consciousness inextricably with having subjective experience, then yes, qualia are inherently part of consciousness.

If you decide to leave subjective experience on one side and rely entirely on external behaviour, then qualia have nothing to do with your experience of consciousness. But such a definition isn't particularly helpful.

There are of course people who don't care whether the talking robot (should one exist) can feel anything. If it talks, if it's able to interact with us, then we don't need to even consider its subjective experience. IMO that's ignoring the difficult problem just because it's difficult. Subjective experience is the big issue of consciousness.
 
That looks fine
Rocketdodger, have you read (and understood) Hofstadter's GEB?

I ask because your notion of "SRIP" essentially bears no relationship to the kind of ideas he builds up to in that book (albeit often using a lot of imagery and metaphors rather than anything more precise or concrete). I was under the impression that you and Pixy were very much "on the same page" when it comes to what SRIP is actually meant to be, and that was something along the lines of what Hofstadter talks about in GEB, but it seems not. In contrast, dlorde does seem to be thinking along the lines of the ideas in GEB.

Going back to the full working srip.cpp implementation that I posted earlier, that actually has to be compiled and run on some physical machine to be conscious (according to you). Right?

But we could also easily translate that piece of code into any one of a number of other languages (and even BF) that don't directly support classes or even structures. I claim you could even translate it into a simple "physical system" along the same lines as Turing did, as below.

Instruct a person to follow a simple procedure:

"There is a box on the table in front of you. There is a piece of string glued at one end to the bottom of the box (on the outside). The other end of the string has a round red sticky ball that can be used to attach that end to other objects or even perhaps the same box. Go to the box. Find the end of the string attached to the bottom of the box and follow it to the end with the red sticky ball. If that red sticky ball is stuck to the same box that you started from then put a stone into the box (there will be some on the table), otherwise remove any stones from the box leaving it empty. Thanks."

So, what you are saying is that when a person executes these instructions (or a mechanical device does essentially the same thing) then the box/string/stone thingy is conscious when a stone is being placed into the box, but is not when the box is emptied (because the string was attached to something else or even nothing).

However, the C++ reference to a class instance or a piece of string tied to a box doesn't achieve anything in terms of creating a conscious system. The box doesn't know or care whether it has a piece of string tied to it, or whether that piece of string is connected at the other end back to it. Ditto for a C pointer to a structure or, more generally, some value in a particular memory location in your PC as a result of compiling/translating srip.cpp to another programming language or to machine code for some architecture. It's only the additional external interpretation that you (the programmer) have in your mind that could possibly make you want to call that a "self-reference" or even SRIP.

If you still believe that srip.cpp is conscious when running then how about "compiling" that small chunk of code to an artificial neural network for us (without any of the baggage that is buried in C++ that isn't essential to this claimed example of SRIP)? You sound like you know quite a lot about such things so I'm sure that won't be hard for you. However, I'm also sure that if you do this you'll also see how completely vacuous your notion is, especially if you let someone else try to interpret what the resulting ANN is "doing" when it runs and without you giving them any "hints". You seem to be confusing your own interpretation of what things mean with what they might actually "mean" if they were just left to "do their own thing".
 
If you still believe that srip.cpp is conscious when running then how about "compiling" that small chunk of code to an artificial neural network for us (without any of the baggage that is buried in C++ that isn't essential to this claimed example of SRIP)? You sound like you know quite a lot about such things so I'm sure that won't be hard for you. However, I'm also sure that if you do this you'll also see how completely vacuous your notion is, especially if you let someone else try to interpret what the resulting ANN is "doing" when it runs and without you giving them any "hints". You seem to be confusing your own interpretation of what things mean with what they might actually "mean" if they were just left to "do their own thing".

The actual example of what SRIP is supposed to be shows how shallow the concept is. There's simply no possible reason to suppose that a piece of C++ code such as that shown has anything whatsoever to do with consciousness. It's just endless assertion.
 
Short answer: Qualia are to consciousness as quanta are to matter.

Your answer is remarkably unhelpful (though quite pithy, so that must count for something, right?)

First--it's suspect: it's not clear from the answer how much you understand about quantum theories.

Second--it's esoteric: it depends on the reader having a similar understanding of quantum theories.

Third--it's most probably a complete analogy fail.

Without the benefit of more explanation, here's how I parse it:

Quanta are discrete packets of energy with definite magnitude that move from one physical entity to another. By analogy, qualia are discrete packets of [experience?] with definite magnitude that move from one [consciousness?] to another.

Now tell me how I misinterpreted your analogy.
 
Rocketdodger, have you read (and understood) Hofstadter's GEB?

I ask because your notion of "SRIP" essentially bears no relationship to the kind of ideas he builds up to in that book (albeit often using a lot of imagery and metaphors rather than anything more precise or concrete). I was under the impression that you and Pixy were very much "on the same page" when it comes to what SRIP is actually meant to be, and that was something along the lines of what Hofstadter talks about in GEB, but it seems not. In contrast, dlorde does seem to be thinking along the lines of the ideas in GEB.
Naw we are all on the same page, more or less. You are just confusing frames -- you are thinking srip.cpp is somehow implied to be SRIP in a frame that it is not.

Going back to the full working srip.cpp implementation that I posted earlier, that actually has to be compiled and run on some physical machine to be conscious (according to you). Right?
Well, yes, but only because that is the only way for a "process" to occur.

But we could also easily translate that piece of code into any one of a number of other languages (and even BF) that don't directly support classes or even structures. I claim you could even translate it into a simple "physical system" along the same lines as Turing did, as below.

Instruct a person to follow a simple procedure:

"There is a box on the table in front of you. There is a piece of string glued at one end to the bottom of the box (on the outside). The other end of the string has a round red sticky ball that can be used to attach that end to other objects or even perhaps the same box. Go to the box. Find the end of the string attached to the bottom of the box and follow it to the end with the red sticky ball. If that red sticky ball is stuck to the same box that you started from then put a stone into the box (there will be some on the table), otherwise remove any stones from the box leaving it empty. Thanks."
Sure.

So, what you are saying is that when a person executes these instructions (or a mechanical device does essentially the same thing) then the box/string/stone thingy is conscious when a stone is being placed into the box, but is not when the box is emptied (because the string was attached to something else or even nothing).
Not exactly -- this is where you are confusing frames. Something there is exhibiting SRIP, but what frame it is in and what exactly the system is composed of is not clear. I would say that if you are looking at the current frame, the system needs to include the person or machine running the computations, but then all of a sudden the system is always referencing itself. If you look at some other frame, abstracted away from what is "doing" the computations, then SRIP like I discussed becomes evident.

You might not think this is a valid answer, but consider that if we ran a full-precision simulation of you, the computer would be the thing "doing" the calculations at the lowest level, yet "you" would be fully conscious in the simulation (or at least you would act like it, in terms of SRIP). In fact, since we don't know the "cause" of the physical laws of our universe, we can't say anything "does" anything on its own in the sense you are speaking of. There could just as well be some deeper cause of particle behavior beyond what we have access to, and all we see are the results, which we merely find patterns in (and call those patterns mathematics). So it would be correct for me to say that the "universe" "causes" a car to roll, just like the person in your example "caused" the rock to be in the box, etc., when viewed from certain frames. Since we are in the same frame as the car, however, it makes more sense to us to say that the car just "rolls" -- we are not exposed to whatever "caused" it to happen.

However, the C++ reference to a class instance or a piece of string tied to a box doesn't achieve anything in terms of creating a conscious system. The box doesn't know or care if it has a piece of string tied to it or whether that piece of string is connected at the other end to it again. Ditto for a C pointer to a structure, or more generally some value in a particular memory location in your PC, as a result of compiling/translating srip.cpp to another programming language or machine code for some architecture. It's only some additional external interpretation that you (the programmer) have in your mind that could possibly be wanting to call that a "self-reference" or even SRIP.
Well, yeah -- because we are humans and we came up with the phrase SRIP. For us to call anything SRIP requires interpretation.

But don't you think, for example, that bacteria would get along just fine if humans hadn't come up with the phrase SRIP?

You are confusing a set of behaviors with the human representation of such sets of behaviors. The actual behaviors exist independently of any human's ability to categorize and recognize them.

If you still believe that srip.cpp is conscious when running then how about "compiling" that small chunk of code to an artificial neural network for us (without any of the baggage that is buried in C++ that isn't essential to this claimed example of SRIP)? You sound like you know quite a lot about such things so I'm sure that won't be hard for you. However, I'm also sure that if you do this you'll also see how completely vacuous your notion is, especially if you let someone else try to interpret what the resulting ANN is "doing" when it runs and without you giving them any "hints". You seem to be confusing your own interpretation of what things mean with what they might actually "mean" if they were just left to "do their own thing".
This is what you don't understand. The set of behaviors -- or behavior, since a set of behaviors is also a behavior -- that we call SRIP exists independently of your ability to see it. When you look at a computer running a simulation, all you see at the lowest level is transistor states flipping back and forth. That's because in our frame, the simulation is merely a bunch of transistors -- the computer. But on top of that there is a simulation frame, where the things in the simulation exist in a different sense than in the frame below. And the fact is that regardless of how the simulation frame is supported -- whether it be a computer or a guy with boxes, a string, a ball, etc. -- as long as the causality in the frame originates in the frame from the perspective of systems in the frame, behaviors can be said to exist in the frame just as they can be said to exist in our own frame.

This is quite obvious if you just take a second to think about it. You yourself have no direct access to how your neurons work. You don't even have direct access to your own neurons, or even groups of them. They are in a frame below what you consider your consciousness. The frame you exist in and observe the world from is not the same frame as the one where particles exist. When we look at scanning tunneling microscope or particle accelerator collision images, we are looking at the mapping of one of those lower frames to our observational frame. The world of particles, and even our own neurons, is and always will be something we can only indirectly gain access to. Furthermore, if you think about it, we won't ever have access to other people's neurons in the sense that we have access to our own frame. The way we, as SRIP in a certain frame, have access to others is via human communication and interaction. You can stare at an MRI all you want and never know what exactly is going on in there, yet when you speak with someone you are instantly aware of their consciousness.

And that is the whole basis for the computational model. We don't think that individual particles have anything to do with consciousness, even though obviously a biological neural network is nothing but a whole bunch of particles. There are behaviors above that level, in the frame of network itself, that we think are responsible. If a neuron was "implemented" by a guy with a set of boxes and a ball on a string, it wouldn't matter to the consciousness that emerged from the network. It might matter to you, observing it from the outside, but that is irrelevant.

For what it is worth, I find it interesting that you question whether or not I have read GEB when clearly you have missed the entire point of the most famous passage from that book -- the conversation between the Tortoise, Achilles, and the Anteater about talking to an anthill.
 
The actual example of what SRIP is supposed to be shows how shallow the concept is. There's simply no possible reason to suppose that a piece of C++ code such as that shown has anything whatsoever to do with consciousness. It's just endless assertion.

I agree, there is no reason to suppose.

That's why nobody here -- besides yourself and the other anti-computationalists -- is talking about supposing the conclusion.

Traditionally, in a proper argument, all the suppositions are limited to some simple premises of the argument. Then logical inference is applied recursively until a conclusion is reached.

I understand that you are trying to abolish that standard, but I'm sorry, it isn't going to be easy, since after all it is how most sane people think about the world.
 
Seems self evident assuming one is using words and sentences. Or are you thinking of communication taking place mathematically?

Would logical axioms necessarily be interpreted the same everywhere?

For example, the computer running the simulation that you might be in could be totally haywire and spitting out random inconsistencies :D

Frank I am talking about the case of both you and I being in the same simulation. I thought that was clear -- was it not?

Obviously, if we are in different ones, or if I am and you are not, then everything I said doesn't apply.
 
Your answer is remarkably unhelpful (though quite pithy, so that must count for something, right?)

First--it's suspect: it's not clear from the answer how much you understand about quantum theories.

Second--it's esoteric: it depends on the reader having a similar understanding of quantum theories.

Third--it's most probably a complete analogy fail.

Without the benefit of more explanation, here's how I parse it:

Quanta are discrete packets of energy with definite magnitude that move from one physical entity to another. By analogy, qualia are discrete packets of [experience?] with definite magnitude that move from one [consciousness?] to another.

Now tell me how I misinterpreted your analogy.

I don't think it's possible to exactly quantify how one element of experience can be isolated from the totality of experience. However, for any given complex experience, it can be considered as being composed of multiple sub-experiences, which are qualia. We don't know what the sub-experiences are, and we have to allow for the possibility that experience doesn't break up in this way at all. However, that just means that the complex experience is in fact not able to be broken up - hence qualia are just the successive experiences.
 
:bwall

Belz, is there a word in your native tongue for "obstinate blockhead" and is it considered a productive pastime to attempt to engage such individuals in reasonable discussion?

Your continuing avoidance certainly seems productive to you. :rolleyes:

And just what am I continuing to "avoid", Belz? You yourself claim that you can't follow the points being presented to you, and yet here you are trying to argue against them. Quite frankly, I'm getting really tired of holding your hand and spoon-feeding you every step of the way -- point by point. All I get from you in response is your braying and whining about how I've not explained anything, as you continue to try to articulate arguments against what you claim not to understand. You're just tilting at windmills and not making any real effort to comprehend what is being said. Do you not see the absurdity of your own behavior right now? Really?

Tell you what: if you can demonstrate the ability to clearly summarize what my actual position is, I'll continue this discussion with you. If you are unable to do that, then you're just wasting both our time.
 
Short answer: Qualia are to consciousness as quanta are to matter.

Your answer is remarkably unhelpful (though quite pithy, so that must count for something, right?)

First--it's suspect: it's not clear from the answer how much you understand about quantum theories.

Second--it's esoteric: it depends on the reader having a similar understanding of quantum theories.

Third--it's most probably a complete analogy fail.

Without the benefit of more explanation, here's how I parse it:

Quanta are discrete packets of energy with definite magnitude that move from one physical entity to another. By analogy, qualia are discrete packets of [experience?] with definite magnitude that move from one [consciousness?] to another.

Now tell me how I misinterpreted your analogy.

I'm saying that "qualia" are what our experiences reduce to. However, instead of their defining properties being discrete magnitudes, their essence lies in being distinct qualities [as per the dictionary definition of a 'quale' as "a sense-datum or feeling having some distinctive quality"]. These distinct qualities [e.g. distinct sensations, feelings, and other perceptions] are combined to form our experiences of any given moment. It's really that simple.
 
But don't let that fool you, Philosaur: It's not like qualia can be defined or tested for. You just have to know that they're there.

"Don't let that fool you"...? "It's not like qualia can be defined or tested for"...?

Genius, qualia are what make up all of your observations, they are the only things you directly perceive -- heck, they ARE your perceptions. Are you truly so utterly dense that after all this time you -still- can't comprehend what the word "qualia" refers to? Seriously, what the hell is wrong with you?
 
I'm saying that "qualia" are what our experiences reduce to. However, instead of their defining properties being discrete magnitudes, their essence lies in being distinct qualities [as per the dictionary definition of a 'quale' as "a sense-datum or feeling having some distinctive quality"]. These distinct qualities [e.g. distinct sensations, feelings, and other perceptions] are combined to form our experiences of any given moment. It's really that simple.

As I thought: catastrophic analogy failure.

Problem 1: Matter does not reduce to quanta. A quantum is a "packet" of energy that effects physical interaction.

Problem 2: Your notion of "distinct" qualities is incoherent. For instance, when I'm looking at a blue-green coffee mug, are there two qualia involved (one for blue, one for green) or just a single blue-green quale? Is my perception of the size of the mug a separate quale? Is the shape of the mug a distinct quale, or is it a set of component qualia? If I look at the mug over time, is there a sequence of identical qualia entering and exiting my mind, or is there a single quale sort of "hanging around"? Does the quale morph as I move around the mug?

Maybe your subjective experience comes to you in distinct packets that you can identify as "qualia". Mine does not. It's as simple as that.

"Don't let that fool you"...? "It's not like qualia can be defined or tested for"...?

Genius, qualia are what make up all of your observations, they are the only things you directly perceive -- heck, they ARE your perceptions. Are you truly so utterly dense that after all this time you -still- can't comprehend what the word "qualia" refers to? Seriously, what the hell is wrong with you?

You can shout it, punctuate it with ad homs, act as incredulous as you like--it doesn't change the fact that what you see as being obvious is anything but.

I find the idea that my experiences can be reduced to component qualia ridiculous. I *do not* perceive the world as a bundle of qualia, regardless of how many times you want to scream and yell and insist that it is so. My experience of the world is a unified, ubiquitous presentation. I can mentally reduce the presentation into component parts, either by their edges, their locations, their logical structure, their emotional resonance, or any number of other criteria. But I find that no matter how hard I try, I can't decompose the presentation into component qualia. Can you?
 
I find the idea that my experiences can be reduced to component qualia ridiculous. I *do not* perceive the world as a bundle of qualia, regardless of how many times you want to scream and yell and insist that it is so. My experience of the world is a unified, ubiquitous presentation. I can mentally reduce the presentation into component parts, either by their edges, their locations, their logical structure, their emotional resonance, or any number of other criteria. But I find that no matter how hard I try, I can't decompose the presentation into component qualia. Can you?

In which case, a single qualia (qualius? qualium) would be the total individual experience. That's fine. There's no rule to say that there's a one-to-one match between sensory input and subjective experience.
 
