Has consciousness been fully explained?

Frank Newgent said:
rocketdodger said:
How will your end of the future of this conversation be different from what it would be if you believed me, RD?

At the very least, I typed different characters on my keyboard.

Is that not action?

I plan to type different characters than I would otherwise.

Is that not intent to act?

Your claim was that belief is not equivalent to intent to act. That claim is unverifiable -- any belief you hold is going to affect your intent to act. Even if you purposefully don't intend to act any differently, that is intending to act as if you were not intending to act differently.


Lost you there.

You want an example of something concerned not with the belief/truth of propositions but rather with the intent/value of actions?


You should be very suspicious of people who are so obsessed with objective "good."


ETA: the reason you and I got off on this tangent was me wondering how one might formalize normative statements such as your last quote.


hey RD, what... did I say something wrong?

Again, I went off on the practical reason/theoretical reason tangent hoping that it might help to illustrate the normative/positive-descriptive difference when you continued to say you didn't understand what I meant by normative.

With your unambiguously normative quote "you should be very suspicious of people who are so obsessed with objective 'good'" I assumed it would become clear first-hand what a normative statement is.

How might one formalize normative statements such as your quote?
 
hey RD, what... did I say something wrong?

Again, I went off on the practical reason/theoretical reason tangent hoping that it might help to illustrate the normative/positive-descriptive difference when you continued to say you didn't understand what I meant by normative.

With your unambiguously normative quote "you should be very suspicious of people who are so obsessed with objective 'good'" I assumed it would become clear first-hand what a normative statement is.

How might one formalize normative statements such as your quote?

By creating moral doctrines.
 
hey RD, what... did I say something wrong?

No, I am just busy these days.

How might one formalize normative statements such as your quote?

Any statement "X should Y" carries an implicit " .... if X wants Z" with it.

Whether someone says it or not, it is there -- even if only in the head of the person who made the statement.

So for me to say "you should be very suspicious of people who are so obsessed with objective 'good'" is really saying " you should be very suspicious of people who are so obsessed with objective 'good' if you care about people disclosing their true intentions with you, and I assume you do," etc.

Already, we are well on the road to complete formalization. In the expanded context, such as that above, "you should" clearly means something like "the behavior with the highest probability of resulting in reaching your goal, all else being equal, in my opinion, is to ..."

So now we have:

"the behavior with the highest probability of resulting in reaching your goal, all else being equal, in my opinion, is to be very suspicious of people who are so obsessed with objective 'good' if you care about people disclosing their true intentions with you, and I assume you do"

Just to get you further along, I will formalize some of the other terms in there.

"in my opinion" can be formalized to "according to the conclusions reached by logical inference I have performed, possibly subconsciously, and/or insight I have had, the source of which is still not agreed upon by human philosophers/scientists .. ... because such people are often motivated by motives that are not immediately apparent..."

"if you care about" can be formalized to "if it would benefit you somehow, even in a purely psychological manner, for .. to ..."

So now we have:

"the behavior with the highest probability of resulting in reaching your goal, all else being equal, according to the conclusions reached by logical inference I have performed, possibly subconsciously, and/or insight I have had, the source of which is still not agreed upon by human philosophers/scientists, is to be very suspicious of people who are so obsessed with objective 'good' if it would benefit you somehow, even in a purely psychological manner, for people to disclose their true intentions with you, and I assume you do, because such people are often motivated by motives that are not immediately apparent"

Should I go on?

I never said it would be short and sweet. But it can be done.
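
In fact, if it helps, here is a toy sketch in Python of what that expansion might look like mechanically. To be clear, this is just my own illustration -- the class and field names are invented, and a real formalization would need far more structure than this:

```python
from dataclasses import dataclass

# Toy illustration of the expansion above: every "X should Y" carries an
# implicit "if X wants Z". Everything here is invented for illustration.

@dataclass
class NormativeClaim:
    action: str     # Y -- the recommended behavior
    goal: str       # Z -- the (often unstated) goal the action serves
    rationale: str  # why the speaker believes the action serves the goal

    def expand(self) -> str:
        """Unfold "you should Y" into its fully explicit, descriptive form."""
        return (
            "the behavior with the highest probability of resulting in "
            "reaching your goal, all else being equal, in my opinion, is to "
            f"{self.action}, if it would benefit you somehow for {self.goal}, "
            f"because {self.rationale}"
        )

claim = NormativeClaim(
    action="be very suspicious of people obsessed with objective 'good'",
    goal="people to disclose their true intentions with you",
    rationale="such people are often motivated by motives that are not "
              "immediately apparent",
)
print(claim.expand())
```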
 
How would you assess 'a little bit of consciousness'? To my understanding, in assessing consciousness, we look for a complex set of behaviours and responses that we associate with consciousness in ourselves, and arbitrarily ascribe the degree of consciousness according to the number and extent of those consciousness-associated behaviours & responses we can identify.
Are you aware of when you are conscious? At this point I don't really care how this internal state (or lack of it) may appear to another. I simply want to know what "happens" to turn a bunch of atoms/quarks/superstrings from a system that has no "feeling" (perhaps like Pixy, judging by the (lack of) evidence he is presenting? :eye-poppi) into something that would call itself conscious - if it had the ability to do that, which it quite possibly doesn't!

Yes, there is such a thing as a 'little bit of consciousness'. If you doubt this, then spend some time in an ICU. Within the areas that we identify in medicine, there are states referred to as 'vegetative', 'minimally conscious', 'stupor', etc. There are, at least, three different systems that we refer to as 'causing consciousness', and each can be interrupted in different ways.
Yes. That's all fine but I'm interested in what this hypothetical (possibly artificial) "neural network" is feeling and aware of itself. Most seem to agree that at some point (e.g. single neuron) it feels and is aware of absolutely nothing - in the same sense that a single electron presumably feels nothing, or even how a small volume of empty space "feels nothing" and is unaware of anything. But at some later stage when things get big enough and are correctly connected, "feeling" (of some kind, and for lack of a better word) appears. We may not be able to precisely detect this transition point externally (who knows what an ant may be feeling or whether it feels anything) but it must exist if the thing we call consciousness is only a result of that underlying network.

This really depends on the definitions one is using. When many people here think of "consciousness" they imagine something like what we humans experience. But the human experience obviously requires many components such as a whole spectrum of perception, a body map, an emotional infrastructure, memory, etc. Obviously all those things cannot be had with a single neuron.

And even if your definition is something along the lines of what Pixy and I use -- simple self reference -- you run into the problem of "what is self reference?"
Seems like you'd have to have a "self" before it could be referenced? Let's say I write a small program that has some kind of internal model of itself (or perhaps just the physical container that it resides within?), and it has some minimal interactive input/output functionality, plus perhaps some kind of ability to learn and change its behaviour (let's say in response to inputs of "good program!" and "bad program!"). Will that system be "aware"? Is it "conscious"?

Conundrums like this are why Pixy and I prefer to stay away from the vague term "consciousness" and focus on the behavior of the system. What can a single neuron do? Clearly not much. Who cares if it is "conscious" or not?
Dodging the conundrum doesn't make it go away. Do you care if you are conscious or not? I personally would care very much to know if a particular extremely small network was conscious even if there was no reliable way to directly observe this externally. (Presumably I would have to be convinced by an extremely strong argument that it must be conscious.)

Which is an illusion.
Fine, if that's the way you want to view it, but in that case I want you to explain how that illusion is generated. How does a bunch of atoms get to have feelings (or even illusions of feelings)? You have said smaller networks have no ability to be "conscious". In other words these are utterly and completely without any sense of self, and are not in any way subject to this illusion. Suddenly (and it really does have to be "suddenly") when the network reaches some minimum size/complexity, then you agree the network can start to feel the effects of this "illusion". I want to know what happens at that particular point. How does "that feeling" get generated?

I do appreciate the various responses (above) to my earlier post about a one-node (neuron) network versus a multiple-neuron network, and the point at which "consciousness" is possible, etc., but the thing that I am really interested in understanding has nothing to do with the behaviours that someone observing the network from an external point of view may or may not notice. Those behaviours may be strong indicators that something like consciousness is happening within the observed system, but unless you "are the system" then you don't know for sure. And in particular, if that system is not based on something close to a human brain then we have even less chance of knowing how "self-aware" it may be. I know how my consciousness feels to me, illusion or not.

As has already been said numerous times, there's no way to be sure that anybody (or anything) else has a similar experience. But something is definitely happening, at least for me! Just saying it can all be reduced to neurons doesn't really explain anything. I know how software can achieve quite "clever things". But that doesn't tell me if that program or the computer running it is having an experience that is in any way analogous to what I am feeling right now as I edit this last sentence.
 
If I didn't know any better, I'd say you're using a different set of criteria for people you agree with and people you don't...
My mistake, I omitted a smiley - I wasn't being entirely serious, more ironic. But on second thoughts, and having read Pixy's original comment that I had previously missed, it seems that he is being sceptical about your claim that 'qualia' means something, and has explained why he is sceptical. So it seems to me that the onus is still on you to evidence or provide a convincing rational argument for your claim :)
 
The only occasions I've ever witnessed this level of dishonesty and self-deception have been in interactions with psychopaths I know. I'm not being facetious at all when I say that I think there's something seriously wrong with PixyMisa.

Seriously? You're implying he may be akin to a psychopath because he ignored a question and doesn't agree with you? :eye-poppi
 
Yes. That's all fine but I'm interested in what this hypothetical (possibly artificial) "neural network" is feeling and aware of itself. Most seem to agree that at some point (e.g. single neuron) it feels and is aware of absolutely nothing - in the same sense that a single electron presumably feels nothing, or even how a small volume of empty space "feels nothing" and is unaware of anything. But at some later stage when things get big enough and are correctly connected, "feeling" (of some kind, and for lack of a better word) appears. We may not be able to precisely detect this transition point externally (who knows what an ant may be feeling or whether it feels anything) but it must exist if the thing we call consciousness is only a result of that underlying network.


That is an architecture that no one yet knows well enough. First we have to define 'feeling' before we can even know what we are dealing with. Feeling, at least from the perspective from which I view it, necessarily involves an entire network. Single neurons wouldn't get close, since they would only be able to provide direct stimulus-response action.
 
Hmm... I think that such an improvement would require a restructuring of the semantics we use with regard to both. A common opinion here is that the language relating the subjective to the objective should simply be at the level of neural net interactions. However, I strongly suspect that we're going to have to go deeper down the emergent scale to the level of elementary physics to get the job done.


OK, but I doubt that is necessary. As to the scale to which we must delve, there is no way to know until we get to it. To save time we should probably see if we might be able to arrive at definitions for each level -- neuron and elementary physics -- keeping the terminology separate.



Well, in previous discussions I used the analogy of consciousness being akin to a "light" operating in a mental software space, which in turn interfaces with the biological hardware. When the "light" passes through certain mental elements it produces a spectrum of differentiated experiences we refer to as "qualia". The 'light', like all entities capable of physical interaction, is energetic in nature and, therefore, has the capacity to enact physical change. The exertions, and the directionality of those exertions, are what we colloquially refer to with words like "will" or "intent". I propose that higher-order organization of this basic process is what gives rise to the varieties and gradations of experience we give names like "emotion", "sensation", "perception", "desire", etc.


OK, but that seems more like a description of an internal experience reflected upon. I'm not sure it makes sense of all the phenomena we encounter, but I'll leave it to you to flesh out the idea.




I suppose we could try doing that but we hit a bit of a road block when I earlier attempted to start mucking about with our basic semantic structure. If you're willing to bear with me this time I'd be more than happy to give it another go :)


Don't recall, but when mucking about with basic semantic structure you're heading into dangerous territory -- dangerous in the sense that you might lose objectivity in examining the ideas. As a species we are famous for fooling ourselves.
 
Seems like you'd have to have a "self" before it could be referenced? Let's say I write a small program that has some kind of internal model of itself (or perhaps just the physical container that it resides within?), and it has some minimal interactive input/output functionality, plus perhaps some kind of ability to learn and change its behaviour (let's say in response to inputs of "good program!" and "bad program!"). Will that system be "aware"? Is it "conscious"?

The system needs to be able to differentiate between the subsets of the model that reference self vs. those that reference non-self, and display different behavior accordingly.

That is what self reference is. That is what self is.

Your deep concept of "self" is nothing more than a very complicated extension of the fact that your brain behaves differently when references to the body that it inhabits are involved.

Sure, such a system is aware and conscious -- but of what? Is it aware of what it looks like? What it sounds like? Who it is?
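
To make that concrete, here is a toy sketch in Python of the kind of program you describe: an internal model with entries tagged self vs. non-self, different behavior for each, and a crude "good program!"/"bad program!" learning signal. The names and structure are invented for illustration; it is a sketch of the bare mechanism, not a claim about how a real system would do it:

```python
# Toy agent: a model whose entries are tagged as about-self or not, with
# different behavior for self-references, plus feedback-driven learning.
# Everything here is invented for illustration only.

class ToyAgent:
    def __init__(self, name: str):
        self.name = name
        # The "model": a few facts, each tagged as about-self or not.
        self.model = {
            name:  {"about_self": True,  "fact": "a small program"},
            "sky": {"about_self": False, "fact": "blue"},
        }
        self.caution = 0.5  # learned parameter, nudged by feedback

    def respond(self, token: str) -> str:
        if token == "good program!":
            self.caution = max(0.0, self.caution - 0.1)  # relax a little
            return "noted"
        if token == "bad program!":
            self.caution = min(1.0, self.caution + 0.1)  # be more careful
            return "noted"
        entry = self.model.get(token)
        if entry is None:
            return "unknown"
        # The self/non-self split: references to the agent itself trigger
        # different behavior than references to anything else.
        if entry["about_self"]:
            return f"that's me: {entry['fact']} (caution={self.caution:.1f})"
        return f"that is {entry['fact']}"

agent = ToyAgent("toy")
print(agent.respond("sky"))           # -> that is blue
print(agent.respond("toy"))           # -> that's me: a small program ...
print(agent.respond("bad program!"))  # feedback shifts its behavior
print(agent.respond("toy"))           # caution is now higher
```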

Dodging the conundrum doesn't make it go away.

Dodging an imaginary ball thrown at your head doesn't make the imaginary ball go away, I agree.

Do you care if you are conscious or not? I personally would care very much to know if a particular extremely small network was conscious even if there was no reliable way to directly observe this externally. (Presumably I would have to be convinced by an extremely strong argument that it must be conscious.)

No, I don't care if I am conscious or not.

I am what I am. What if aliens came down and told me I wasn't conscious -- would I lose consciousness? Would my self-awareness evaporate? lol

I have no interest in whether or not a given neural network is conscious if that consciousness doesn't lead to anything observably interesting. I am interested in whether it can do things that I can observe and that interest me.
 
rocketdodger said:
How might one formalize normative statements such as your quote?

Any statement "X should Y" carries an implicit " .... if X wants Z" with it.

Whether someone says it or not, it is there -- even if only in the head of the person who made the statement.

So for me to say "you should be very suspicious of people who are so obsessed with objective 'good'" is really saying " you should be very suspicious of people who are so obsessed with objective 'good' if you care about people disclosing their true intentions with you, and I assume you do," etc.

Already, we are well on the road to complete formalization. In the expanded context, such as that above, "you should" clearly means something like "the behavior with the highest probability of resulting in reaching your goal, all else being equal, in my opinion, is to ..."

So now we have:

"the behavior with the highest probability of resulting in reaching your goal, all else being equal, in my opinion, is to be very suspicious of people who are so obsessed with objective 'good' if you care about people disclosing their true intentions with you, and I assume you do"

Just to get you further along, I will formalize some of the other terms in there.

"in my opinion" can be formalized to "according to the conclusions reached by logical inference I have performed, possibly subconsciously, and/or insight I have had, the source of which is still not agreed upon by human philosophers/scientists .. ... because such people are often motivated by motives that are not immediately apparent..."

"if you care about" can be formalized to "if it would benefit you somehow, even in a purely psychological manner, for .. to ..."

So now we have:

"the behavior with the highest probability of resulting in reaching your goal, all else being equal, according to the conclusions reached by logical inference I have performed, possibly subconsciously, and/or insight I have had, the source of which is still not agreed upon by human philosophers/scientists, is to be very suspicious of people who are so obsessed with objective 'good' if it would benefit you somehow, even in a purely psychological manner, for people to disclose their true intentions with you, and I assume you do, because such people are often motivated by motives that are not immediately apparent"

Should I go on?

I never said it would be short and sweet. But it can be done.



Thanks for taking the time to do that.

What I have bolded in your quote describes what somebody might learn over time through practice in dealing intelligently with a large number of people.

It's not just self-referentiality; there's quite a lot of coreferentiality involved, I think it's safe to say.

How does an algorithm pack in what seems like the potentially infinite amount of info needed to make the formalization you laid out?

Thanks in advance for dealing with my questions.
 
My mistake, I omitted a smiley - I wasn't being entirely serious, more ironic. But on second thoughts, and having read Pixy's original comment that I had previously missed, it seems that he is being sceptical about your claim that 'qualia' means something, and has explained why he is sceptical. So it seems to me that the onus is still on you to evidence or provide a convincing rational argument for your claim :)

Qualia is just a term for what we all experience. PixyMisa is hardly in a position to make an argument against their reality or definition, seeing as how he can't even answer a simple question about his own feelings.
 
AkuManiMani said:
The only occasions I've ever witnessed this level of dishonesty and self-deception have been in interactions with psychopaths I know. I'm not being facetious at all when I say that I think there's something seriously wrong with PixyMisa.

Seriously? You're implying he may be akin to a psychopath because he ignored a question and doesn't agree with you? :eye-poppi

There are plenty of people here who have had very strong disagreements with me and may not have responded to some questions here or there, but I don't question their psychological health. I'm saying that, judging from Pixy's overall response history, he has no feelings that he is aware of or willing to acknowledge. This indicates to me that he is either seriously emotionally repressed or lacks the capacity altogether (as is the case with psychopathy). Right now, I'm inclined to suspect that the former is the more likely explanation but, in either case, I think he is psychologically stunted.

Anyway, PixyMisa, I'm still waiting for you to answer my question:

What are you feeling right now?
 
Hmm... I think that such an improvement would require a restructuring of the semantics we use with regard to both. A common opinion here is that the language relating the subjective to the objective should simply be at the level of neural net interactions. However, I strongly suspect that we're going to have to go deeper down the emergent scale to the level of elementary physics to get the job done.

OK, but I doubt that is necessary. As to the scale to which we must delve, there is no way to know until we get to it. To save time we should probably see if we might be able to arrive at definitions for each level -- neuron and elementary physics -- keeping the terminology separate.

I think I may have mentioned this before but, as far as our scientific knowledge of consciousness goes, all we have is a catalog of neural correlates but no physical explanation of what makes those particular systems conscious. However it's accomplished, one can be sure that we must at least have a way of integrating the elements of the subjective into our physical model(s) or, failing that, some objective way of identifying consciousness-compatible systems.

Earlier in the thread I suggested to dlorde that what we call consciousness, to some extent or another, is a defining feature of living systems. I suppose a good physical indicator of a conscious system [or, at the very least, a system capable of such] would be if it were utilizing energy to push itself away from thermodynamic equilibrium in a self-sustaining manner instead of running down like inanimate systems.

It could also be that critters like us evolved nervous systems to support a greater degree of awareness and lucidity than other tissue types are capable of. This would be reflected by the greater metabolic requirements of neural tissue.

I know the above proposals are very broad brush strokes, but at least they give some possible objective criteria for assessing whether a physical system is conscious or capable of supporting it.
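
Just to make that disequilibrium criterion vivid, here is a crude numerical cartoon in Python. All the names and constants are invented, and this is obviously nothing like a real thermodynamic model -- it only shows the shape of the idea: a system with no energy input runs down to equilibrium, while one that spends energy holds itself away from it.

```python
# Crude cartoon of the criterion above: an inanimate system relaxes toward
# equilibrium, while a "living" one spends energy to hold itself away from
# it. Variables and constants are invented for illustration only.

EQUILIBRIUM = 0.0   # state value the environment drags everything toward
DECAY = 0.2         # fraction of the gap closed each step

def step(state: float, energy_spent: float = 0.0) -> float:
    """Relax toward equilibrium, offset by any energy spent resisting it."""
    relaxed = state - DECAY * (state - EQUILIBRIUM)
    return relaxed + energy_spent

rock, cell = 5.0, 5.0
for t in range(30):
    rock = step(rock)                    # no energy input: runs down
    cell = step(cell, energy_spent=1.0)  # self-sustaining disequilibrium

print(f"rock: {rock:.3f}")  # -> near 0.0, i.e. at equilibrium
print(f"cell: {cell:.3f}")  # -> holds a steady offset (1.0/0.2 = 5.0)
```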

Well, in previous discussions I used the analogy of consciousness being akin to a "light" operating in a mental software space, which in turn interfaces with the biological hardware. When the "light" passes through certain mental elements it produces a spectrum of differentiated experiences we refer to as "qualia". The 'light', like all entities capable of physical interaction, is energetic in nature and, therefore, has the capacity to enact physical change. The exertions, and the directionality of those exertions, are what we colloquially refer to with words like "will" or "intent". I propose that higher-order organization of this basic process is what gives rise to the varieties and gradations of experience we give names like "emotion", "sensation", "perception", "desire", etc.

OK, but that seems more like a description of an internal experience reflected upon. I'm not sure it makes sense of all the phenomena we encounter, but I'll leave it to you to flesh out the idea.

Hm. Any particular phenomena you could point out to me, so I could attempt to frame them in the context of the above scheme? I'm sure there are a lot of significant details I may have overlooked.

I suppose we could try doing that but we hit a bit of a road block when I earlier attempted to start mucking about with our basic semantic structure. If you're willing to bear with me this time I'd be more than happy to give it another go :)

Don't recall, but when mucking about with basic semantic structure you're heading into dangerous territory -- dangerous in the sense that you might lose objectivity in examining the ideas. As a species we are famous for fooling ourselves.

As long as I'm getting honest external feedback from you it will be much harder to get too lost or disoriented :)
 
PixyMisa said:
No matter if we reduce them, the reduced information is still assimilated via our consciousness/experience/perception.

The issue for me is not one of fundamentals/ontology/-isms, it's one of practicality.
It's more a question of motives than fundamentals.

If we do not examine our consciousness through self-knowledge/meta-cognition, then our motives forever remain hidden and mostly a burden.
Actually, science is a powerful tool for examining what motivates us. More powerful than introspection, because it can learn things introspection cannot.

Sure, a nuclear bomb is more powerful than a knife. How does that help when you need to clean a fish?

PixyMisa said:
Science ignores the hidden motives of the scientists.
Sure, because they're irrelevant.
Well, here is your problem then.

PixyMisa said:
That is why a high level research scientist can still be a theist.
It's the method that matters. It's based on procedure, not belief. Science doesn't care what you believe; if you do it right, you get the same answer every time.
Ah, so you have solved the problem of induction.
I will be waiting for the answer.

PixyMisa said:
Recently a fundie told me he has changed his approach to science, since for him science is now an exercise in confirming the Bible!!
People say weird things.
This is the exact problem: dismissing what people say as just weird is not only arrogant but unscientific.

PixyMisa said:
This is more and more common I am afraid.
Eh. He's talking nonsense. Nonsense has always been with us. It's no worse now than in years past, and probably somewhat better.
Ignorance is bliss....have a nice day.

PixyMisa said:
The whole computability ontology is the same thing, just more acceptable because computers are not telling us what to do yet.
No.
Yes.

PixyMisa said:
And it's these hidden motives that color our view of reality, for the sake of an ontology/-ism.
No.
Yes.

PixyMisa said:
Science works.
When did I say science does not work? Preaching scientism is boring, so drop it.

PixyMisa said:
If our motive is to reduce everything for the sake of reductionism instead of the sake of practicality then we end up missing the obvious - qualia for example.
There's no such thing.
Yes there is.

PixyMisa said:
Ignoring plant physiology and ecology has gotten many geneticists running after the reductionist DNA one way street dogma and ending up with total failures in the field.
Name three.
http://www.natureinstitute.org/nontarget/report_class.php

PixyMisa said:
Boasting that everything is physics is really saying nothing at all about anything except about yourself.
Boasting? What are you talking about?

Everything is physics. It's a fact. It's reality. Join us. It may not always be comfortable, but at least it's true.
You sound like a priest selling a religion.
 
First of all, that is not "what is consciousness?" but concerns a discussion of the contents of consciousness. That is a very different type of discussion.
Well, there is the problem. Consciousness is not an abstraction; it is a real phenomenon with content and context. Any definition of consciousness requires an examination of its content and context. This is what a science of qualities is all about. It is when a description of content and context becomes the definition.

There is no science of qualities. This is all old philosophy dating from the advent of rationalism and discussion of properties. It was picked over repeatedly and eventually evolved into various forms of idealism where it has remained. It amounts to rearranging deck chairs on the Titanic.
Thanks for your opinion.
 
Earlier in the thread I suggested to dlorde that what we call consciousness, to some extent or another, is a defining feature of living systems. I suppose a good physical indicator of a conscious system [or, at the very least, a system capable of such] would be if it were utilizing energy to push itself away from thermodynamic equilibrium in a self-sustaining manner instead of running down like inanimate systems.

I think you touched on a critical point there.
All our observations tell us that consciousness only occurs within living systems.
The way living systems generate, preserve and store energy is key to re-creating consciousness.
We have a long way to go before we have mastered this essential requirement for consciousness.
I don't see AI researchers showing any interest in this as they assume energy just comes from a plug point.
 
cornsail said:
How would Hofstadter and Dennett operationalize it?
Self-referential information processing.
That's not an operational definition.

Every science experiment tests the working assumptions of science itself.

Not true. Science experiments test their hypotheses.

Science depends on methodological naturalism, which can only be reliable if metaphysical naturalism is correct.

Not really. Methodological naturalism is epistemology and basically just says useful study is study that can lead to predictive power.

Given those premises, the very concept of qualia is incoherent nonsense.

Because...?

If qualia are real, the entirety of modern science and engineering, all of human civilisation in fact, is just a series of accidents, uncounted quintillions of them all somehow working out the way you'd expect if science, rather than qualia, were real.

I don't see how that follows.
 
