
Has consciousness been fully explained?

EXACTLY.

Which is why it irritates me when people say 'but how can we know for sure' about things that all experience says we DO know for sure.

It is obviously both accurate and reasonable to claim that we do not know certain things. Painting statements like "we don't know for sure" as indicating a lack of intelligence with a broad brush is trivially inaccurate and unreasonable.

So, be specific please. What is it you are claiming to know for sure that people are denying is the case?
 
Not in every case, no.
Thank you. The specific examples I mentioned were an attempt to get you to acknowledge that fact. One case is enough to make that statement true. That we disagree on some of the specific cases is not terribly important to me, although I would appreciate hearing what cases give you reason to conclude we can't make that statement.
 
It is obviously both accurate and reasonable to claim that we do not know certain things. Painting statements like "we don't know for sure" as indicating a lack of intelligence with a broad brush is trivially inaccurate and unreasonable.

So, be specific please. What is it you are claiming to know for sure that people are denying is the case?


We do not treat all theories with equal respect and the reason concerns the type of evidence and the ability of a model to explain the available evidence.

The evidence for quantum effects on microtubules causing measurable changes in consciousness is extremely poor, not rising to the level at which we would give it legitimate consideration. And the evidence against the predictions it makes -- already presented in this thread -- largely removes it from the table.

So, that would be one example.
 
Q1. Do you believe a single "neuron" plus a finite number of suitable connections in any particular arrangement (to itself or just "waving in the wind" in this case?!) can produce consciousness to any degree at all under any circumstance?

This really depends on the definitions one is using. When many people here think of "consciousness" they imagine something like what we humans experience. But the human experience obviously requires many components such as a whole spectrum of perception, a body map, an emotional infrastructure, memory, etc. Obviously all those things cannot be had with a single neuron.

And even if your definition is something along the lines of what Pixy and I use -- simple self reference -- you run into the problem of "what is self reference?" There is perhaps some configuration that a single neuron could be put in such that it responds differently to events in the environment than it does to action potentials directed back at itself by some kind of a kooky self-synapse (although I am not sure such a thing even exists) and you might be able to label this "self reference," but what good is that label in this case?

Obviously the single neuron doesn't really do anything we humans find interesting, and it obviously doesn't do anything it itself might find interesting (unless you reduce the definition of interesting as well), so in this case what is the proper approach? Should we call it conscious because it exhibits trivial self reference? Or should we say it isn't conscious, even though it exhibits trivial self reference, because although the self reference does satisfy our formal definition perhaps the definition should be more rigorous (and we are just too lazy to put the effort into changing the definition -- I know I am, at least)?

Conundrums like this are why Pixy and I prefer to stay away from the vague term "consciousness" and focus on the behavior of the system. What can a single neuron do? Clearly not much. Who cares if it is "conscious" or not?

Q2. Is it possible that adding a single neuron to such a network and additional connections (to or from that newly added neuron) now allows the expanded network to produce any consciousness at all?

Again it depends on the definition of "consciousness" being used, but I would say that there are certainly thresholds below which a certain observable behavior is simply not displayed.

At the very least there is no visual perception without some kind of light receptor neurons, no auditory perception without the right receptor neurons, etc.

Furthermore we can be pretty sure there won't be much of a memory -- at least as we experience memory -- without something resembling an associative network.

The list goes on for quite a while.

And of course there is the question of whether or not the (n+1)th neuron allows the network to finally satisfy a given definition of self-reference. Maybe you could have gotten self reference with n neurons, but they weren't wired that way, and the (n+1)th neuron finishes the "circuit."

But at a fundamental level there is the same problem as with the first question you asked -- the labels get in the way. It is much clearer to simply speak of what a network of n+1 neurons might be able to do that a network of n neurons cannot.
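To make that a bit more concrete, here is a toy sketch (entirely my own construction, in Python -- not from any particular source): a chain of n threshold units is purely feedforward, so a pulse of activity just runs off the end and dies. Wire in one extra unit that feeds back to the start and you have a cycle, so activity can now persist indefinitely -- a crude capability the n-unit network simply did not have.

[code]
# Illustrative toy only: adding one node can change what a network can do
# in kind, not just in degree. Binary threshold units, synchronous updates.

def step(state, edges, threshold=1):
    """One synchronous update: a unit fires if its summed input reaches threshold."""
    incoming = {node: 0 for node in state}
    for src, dst in edges:
        incoming[dst] += state[src]
    return {node: 1 if incoming[node] >= threshold else 0 for node in state}

# n = 3 units in a simple chain 0 -> 1 -> 2 (no cycle): activity dies out.
chain_edges = [(0, 1), (1, 2)]
state = {0: 1, 1: 0, 2: 0}
for _ in range(5):
    state = step(state, chain_edges)
print("chain:", state)   # {0: 0, 1: 0, 2: 0}

# Add a 4th unit that feeds back to unit 0: now there is a loop,
# and a pulse keeps circulating around the cycle indefinitely.
loop_edges = chain_edges + [(2, 3), (3, 0)]
state = {0: 1, 1: 0, 2: 0, 3: 0}
for _ in range(5):
    state = step(state, loop_edges)
print("loop: ", state)   # exactly one unit is still active after every step
[/code]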

Pixy's and your position (if I understand it correctly) is that a suitably large network with appropriate connections can definitely produce consciousness (i.e. it's "mathematically proven") without requiring anything more than what could essentially be described as a conventional "artificial neural network". If this is correct then either there is "a little bit of consciousness" possible in even the smallest of such networks, or else it must suddenly "pop into existence" once the network has grown to a large enough configuration (and also with the appropriate connections/weights etc.)

Again, I would prefer to say that there are thresholds below which a given behavior is not exhibited.

But yes that is a decent summary of our position.

Q3. Is the lowest such number of nodes that allows consciousness (call it Nmin) greater than one but still finite?

Corrected to speak in terms of behavior and not "consciousness," I would say yes.

If that's correct then I'd like to hear your explanation of how adding one more neuron (plus connections) to an existing completely non-conscious network can now produce at least a glimmer of self-awareness. What did adding that extra node (plus connections) do?

Hopefully I already explained that above.

But here are some examples:

Suppose the behavior in question is distinct memory recall of a certain number of events. We know from research on associative networks that there is a minimum number of nodes required for good convergence on a given "recall" state from a given initial state. That number increases if the recall states in question (the events in memory) are similar, and decreases if the states are very distinct. If you go below a certain number, the network is simply unable to converge (remember) reliably on one state or another -- it either fails to converge at all (rare) or else it converges to the wrong recall state from a given initial state.
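For the curious, here is a minimal Hopfield-style sketch of that point (my own toy code, not taken from any of the research I'm referring to). With N binary units and simple Hebbian weights, only roughly 0.14*N random patterns can be stored before recall becomes unreliable, and correlated (similar) patterns cut that number down further.

[code]
# Toy associative memory: store a few +/-1 patterns, then recall one from
# a corrupted probe. Numbers here are arbitrary choices for the example.
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian outer-product weights, no self-connections."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0.0)
    return w / n

def recall(w, probe, sweeps=5):
    """Asynchronous updates: each sweep visits every unit once in random order."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

N = 100                                       # number of units
patterns = rng.choice([-1, 1], size=(5, N))   # 5 random patterns, well under ~0.14*N
w = train(patterns)

# Corrupt the first pattern in 10 places and see whether the net recovers it.
probe = patterns[0].copy()
flipped = rng.choice(N, size=10, replace=False)
probe[flipped] *= -1
print("recovered:", np.array_equal(recall(w, probe), patterns[0]))  # usually True at this light loading
[/code]

Push more patterns in (or make them similar to each other) and "recovered" starts coming back False -- the shrinking-capacity effect described above, just seen from the other direction.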

Suppose the behavior in question is the differentiation of two objects using visual perception. To keep it simple, further suppose the behavior is simply the detection of superficial differences -- color, shape, whatever. We know from research on perceptrons that a minimum number of nodes is needed to reliably distinguish between any two inputs (assuming they are indeed different). At the very least, for example, to distinguish between a square and a circle there must be enough sample nodes to detect the portions of the square that are not present in the sample window when a circle is sampled instead. Of course there could be some more complex method, using a single sample node that is moved around, etc, but you get the idea.
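And to illustrate the "enough sample nodes" point with a toy of my own (the grid size, shapes and sample positions are all made up for the example): on an 8x8 grid, a filled square and a filled disc of similar area both cover the centre, so a single centre sample node reads exactly the same for both, while four sample nodes near the corners separate the two shapes with a single linear threshold unit.

[code]
# Toy only: too few sample nodes cannot distinguish a square from a disc,
# but sample nodes placed where the shapes differ (the corners) can.
import numpy as np

def square(n=8):
    img = np.zeros((n, n), dtype=int)
    img[1:7, 1:7] = 1                     # 6x6 filled square
    return img

def disc(n=8, r=3.4):
    y, x = np.mgrid[0:n, 0:n]
    return ((x - 3.5) ** 2 + (y - 3.5) ** 2 <= r ** 2).astype(int)  # filled disc, similar area

sq, dc = square(), disc()

# One sample node at the centre: identical readings, so no discrimination possible.
print("centre pixel:", sq[4, 4], dc[4, 4])                   # 1 1

# Four sample nodes at the inner corners: the square fills them, the disc does not.
corners = [(1, 1), (1, 6), (6, 1), (6, 6)]
print("square corners:", [sq[r, c] for r, c in corners])     # [1, 1, 1, 1]
print("disc corners:  ", [dc[r, c] for r, c in corners])     # [0, 0, 0, 0]

# A single linear threshold unit over those four inputs separates the shapes.
weights, bias = np.ones(4), -2.0
def fires(img):
    x = np.array([img[r, c] for r, c in corners])
    return bool(weights @ x + bias > 0)
print("classified as square:", fires(sq), fires(dc))         # True False
[/code]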

As for self reference ... well, that is a little more complex, mainly because the idea of self reference is so fuzzy and by definition subjective (it is subjective in a trivial sense because after all it is the entity, or network in this case, that is referencing itself that is providing the definition in some way). But even here it is obvious (to me, at least) that if you define a behavior to be exhibited, there will be a minimum number of nodes for any network that can satisfy that behavior.

Fundamentally, self reference requires the distinction between self and non-self, so I would expect such a network (notwithstanding the kooky self-wired thing I mentioned in the response to Q1) to have at least 2 nodes, and if I was the one designing it I would probably use many more than that.

But what kind of behavior would you get with only 2 nodes, even if the network was somehow self referential? Not much. So how many nodes would you need for a network that was self referential in a visual sense? Well, you need all the perceptron filtering (or some other kind of input filtering; it just so happens that most of nature uses perceptron-like arrangements at the first level) and then all the nodes required for actually registering the distinction between visual self and non-self, etc. It should be clear to you by now that to get anything resembling even what a mouse is capable of requires a great many nodes indeed.

Would also love to hear your opinion on the size (even roughly) of the smallest such network.

I think I covered that. In summary, my answer is that it really depends and furthermore I am not experienced enough to provide specifics.

A pipe-dream project of mine is to hook up a really easy-to-use artificial neural network development environment to a commercial game engine so people can tinker with getting game AI to work using ANNs as their brains instead of the finite state machines we use now.

Because I haven't done any work at all in that direction, my experience with neural networks is limited to what I learned in college and what I have read since then. I would love to sit down and play with a small network until I got it to exhibit what I might consider genuine self reference in the context of a game world.
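Just to make it clearer what I'm picturing, here is a very rough sketch of the tiniest possible version of that idea (every name, input and number is hypothetical, and the weights are hand-set purely to make it runnable; in the imagined tool they would be trained or evolved inside the game engine): a small feedforward net standing in for the finite state machine that would normally pick a game agent's action.

[code]
# Hypothetical sketch: a hand-weighted two-layer net as a drop-in
# replacement for an FSM-style "decide action" function in a game AI.
import numpy as np

ACTIONS = ["wander", "chase", "flee"]

# Inputs: [distance_to_player (0..1), own_health (0..1), player_visible (0 or 1)]
W_hidden = np.array([[-2.0,  0.0,  3.0],    # unit 1: responds when player is close and visible
                     [ 0.0, -3.0,  1.0]])   # unit 2: responds when own health is low
b_hidden = np.array([-0.5, 0.5])
W_out = np.array([[-1.0, -1.0],             # wander: neither condition holds
                  [ 2.0, -2.0],             # chase: close-and-visible, but healthy
                  [ 0.5,  2.5]])            # flee: low health dominates
b_out = np.array([0.5, -0.5, -0.5])

def decide(observation):
    """Map a 3-number observation to an action name."""
    h = np.tanh(W_hidden @ observation + b_hidden)
    scores = W_out @ h + b_out
    return ACTIONS[int(np.argmax(scores))]

print(decide(np.array([0.9, 0.9, 0.0])))    # far away, healthy, unseen  -> wander
print(decide(np.array([0.1, 0.9, 1.0])))    # close, healthy, visible    -> chase
print(decide(np.array([0.1, 0.1, 1.0])))    # close, hurt, visible       -> flee
[/code]

The appeal over a finite state machine is that the same structure keeps working when you add inputs or let the weights be learned, instead of hand-writing a new transition table every time.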
 
Thank you. The specific examples I mentioned were an attempt to get you to acknowledge that fact. One case is enough to make that statement true. That we disagree on some of the specific cases is not terribly important to me, although I would appreciate hearing what cases give you reason to conclude we can't make that statement.
You don't quite grasp the problem.

There are no possible physical effects remaining that could change our view of how the brain works. It doesn't matter what you do; what is already known is already known; what is not known is bounded by what is known. There's nothing left for you; no possible rational basis for your objection. Nothing.
 
EXACTLY.

Which is why it irritates me when people say 'but how can we know for sure' about things that all experience says we DO know for sure. And we, being skeptics, have to agree that, well, yes, there is a chance that we are wrong -- and they take that tiny stupid chance and hold on to it as a reason to ignore the plain truth staring them in the face.

It irritates me when people poke and pick to try to find the tiniest problem with somebody else's idea just so they can be satisfied that we don't know the answer, instead of actually trying to come up with anything useful themselves.

This isn't directed at you, btw -- I don't even know what set me off; it's just a pet peeve.

Oki, I think I'm getting where you're comin' from. Hopefully, this old discussion will be able to progress beyond that point once some fresh ideas and perspectives start being entertained. Right now it looks like most of the usual participants are just falling back on the same tired arguments. Let's work on changing that :o
 
What do you mean by 'qualia'?

The word 'qualia' refers to feelings, emotions, sensations, perceptions etc as -experienced by a subject-. It's a very simple concept to grasp. I can only assume that certain individuals are deliberately playing dumb to obstruct this conversation from progressing in directions they're not comfortable with.
 
You don't quite grasp the problem.

There are no possible physical effects remaining that could change our view of how the brain works. It doesn't matter what you do; what is already known is already known; what is not known is bounded by what is known. There's nothing left for you; no possible rational basis for your objection. Nothing.

You neglected to answer my request for an example. Let's review what was said:

Beth said:
I'm not sure this is true. In the natural world, just as many features are fractal in form and amenable to analysis via fractal maths, many features are mathematically chaotic in form and amenable to analysis via the maths of chaos theory. It is the chaotic behaviour of natural processes that gives rise to many of the patterns and self-organisation found in nature, e.g. spatio-temporal chaos in reaction-diffusion reactions.

We can't predict the exact forms that will result from the activities of such systems, but we can predict the kind of forms they will produce - they are amenable to mathematical analysis. The brain is a complex self-ordering structure and is known to have chaotic features in its functioning, and it seems reasonable to suppose that while its functioning may be unpredictable, it may be amenable to mathematical analysis, and it may be possible to emulate some aspects of its complex activities using such mathematical techniques.

Absolutely. I don't disagree with any of this. What I'm saying is that we do not, at this point, know enough to state that it definitely is or is not possible.
Not in every case, no.

I understand that we disagree regarding some examples I brought up that cause me to feel that we cannot state with certainty that it is or is not possible. What I would like to know is which cases cause you to conclude that we don't know if it is possible. Prior to this post, I was under the impression that you had no doubts about the possibility.
 
It is obviously both accurate and reasonable to claim that we do not know certain things. Painting statements like "we don't know for sure" as indicating a lack of intelligence with a broad brush is trivially inaccurate and unreasonable.

So, be specific please. What is it you are claiming to know for sure that people are denying is the case?

As for myself, it isn't really that I think it indicates a lack of intelligence, rather it indicates a worldview that I find unacceptable.

For Roger Penrose to seriously consider that there exists actual Platonic ideals of ethics and aesthetics embedded into the fabric of the universe is an insult to much of what I as a person stand for.

This view simply smacks of elitism and even racism. What, certain cultures have access to those ideals while others do not? Certain ethnicities? My microtubule quantum calculations are out of whack because I don't agree with the established ideals?

So for anyone to actually give validation to such a viewpoint -- even a teeny tiny bit -- is also a little insulting. I don't like that Beth even considers it possible. What does that say about Beth? That she thinks it is "possible" that Platonic ideals really exist? That is tantamount to saying it is "possible" that whites are superior to blacks. Eh, why not? It could be a Platonic ideal, right? Who is to say what the Platonic ideals are -- all we can do is go with it, right? We have no say, since the whole point of Platonic ideals is that they are objective, right?

Maybe Beth is only talking about the microtubule quantum calculations -- maybe she thinks that is possible, not the Platonic ideal part. But that isn't their hypothesis! The whole point of the quantum calculations is to access the Platonic ideals! What the hell would a neuron do with the power of a nondeterministic state machine, given via quantum superposition, if not to access these Platonic ideals? How on Earth would it affect anything if it didn't give us "insight" regarding those ideals?

I'm sorry, but in any form this pill is simply disagreeable. The whole hypothesis is not only bollocks from a physics and biology standpoint but even worse from an ethical standpoint. And if you try to accept only the potentially good parts of the hypothesis, you are left with something that makes even less sense.
 
You never actually left square one. And you won't, until you give up the notion that "qualia" is a meaningful term.

You never actually left square one. And you won't, until you recognize that "qualia" is a meaningful term for 'meaning'.
 
We do not treat all theories with equal respect and the reason concerns the type of evidence and the ability of a model to explain the available evidence.

Of course.

The evidence for quantum effects on microtubules causing measurable changes in consciousness is extremely poor, not rising to the level at which we would give it legitimate consideration.

I'm not really familiar with the evidence. "Legitimate" consideration is a matter of opinion, I guess.

And the evidence against the predictions it makes -- already presented in this thread -- largely removes it from the table.

Can you (or someone else) help direct me to what you're referencing? I skimmed the last couple pages and got lost. :(
 
The word 'qualia' refers to feelings, emotions, sensations, perceptions etc as -experienced by a subject-. It's a very simple concept to grasp. I can only assume that certain individuals are deliberately playing dumb to obstruct this conversation from progressing in directions they're not comfortable with.

But that isn't true.

Traditionally, "qualia" are the quality of that experience. "What it is like to see red" is, in the mind of HPC proponents, different from "seeing red."

Nobody disputes that there are feelings, emotions, sensations, perceptions, etc as experienced by a subject.

The dispute is whether there is something else there. The dispute is whether "what it is like to see red" is the same thing as "seeing red" or not.

If I walked down the street and asked people "what is it like to experience pain, above and beyond the experience of pain?" they would look at me like I was crazy. Unless they were philosophers who bought into the HPC, in which case they would babble incoherently using big words for a while and then ask me "do you understand?"

No, sorry -- I don't understand. The experience of pain is the experience of pain.
 
Right. That is the problem with the concept of qualia. They are explicitly defined as being what is left over once what can actually be demonstrated to exist has been eliminated.

By definition, qualia don't exist.
 
As for myself, it isn't really that I think it indicates a lack of intelligence, rather it indicates a worldview that I find unacceptable.

You missed my point. Suggesting that statements like "we don't know" indicate a lack of intelligence or an unacceptable worldview, as a general claim, is trivially silly/wrong. In certain contexts it may be a fair assessment.

For Roger Penrose to seriously consider that there exists actual Platonic ideals of ethics and aesthetics embedded into the fabric of the universe is an insult to much of what I as a person stand for.

This view simply smacks of elitism and even racism. What, certain cultures have access to those ideals while others do not? Certain ethnicities? My microtubule quantum calculations are out of whack because I don't agree with the established ideals?

So for anyone to actually give validation to such a viewpoint -- even a teeny tiny bit -- is also a little insulting. I don't like that Beth even considers it possible. What does that say about Beth? That she thinks it is "possible" that Platonic ideals really exist? That is tantamount to saying it is "possible" that whites are superior to blacks. Eh, why not? It could be a Platonic ideal, right? Who is to say what the Platonic ideals are -- all we can do is go with it, right? We have no say, since the whole point of Platonic ideals is that they are objective, right?

Looks like you're injecting too much emotion into things. You finding an idea insulting based on its supposed implications says nothing about its accuracy. And inferring that Beth is suggesting whites are superior to blacks is simply a ridiculous stretch.

I'm sorry, but in any form this pill is simply disagreeable. The whole hypothesis is not only bollocks from a physics and biology standpoint but even worse from an ethical standpoint. And if you try to accept only the potentially good parts of the hypothesis, you are left with something that makes even less sense.

I'm not very familiar with the idea so I can't comment on its merit, but again, I don't see why you would bring ethics into this.
 
AkuManiMani said:
The word 'qualia' refers to feelings, emotions, sensations, perceptions etc as -experienced by a subject-. It's a very simple concept to grasp. I can only assume that certain individuals are deliberately playing dumb to obstruct this conversation from progressing in directions they're not comfortable with.

But that isn't true.

Traditionally, "qualia" are the quality of that experience. "What it is like to see red" is, in the mind of HPC proponents, different from "seeing red."

Nobody disputes that there are feelings, emotions, sensations, perceptions, etc as experienced by a subject.

The dispute is whether there is something else there. The dispute is whether "what it is like to see red" is the same thing as "seeing red" or not.

If I walked down the street and asked people "what is it like to experience pain, above and beyond the experience of pain?" they would look at me like I was crazy. Unless they were philosophers who bought into the HPC, in which case they would babble incoherently using big words for a while and then ask me "do you understand?"

No, sorry -- I don't understand. The experience of pain is the experience of pain.

In past discussions I've clearly and directly stated the definition of qualia -- just as I have now -- and certain individuals [you know who you are] continued to insist that it is an ill-defined, nonsensical concept. Once again I'll present dictionary definitions of the word:

................................................................................

Dictionary.com:

qua·le [kwah-lee, -ley, kwey-lee]
noun, plural -li·a [-lee-uh]
Philosophy.
1. a quality, as bitterness, regarded as an independent object.
2. a sense-datum or feeling having a distinctive quality.
Origin: 1665–75; < L quāle, neut. sing. of quālis "of what sort"

................................................................................

Wiktionary.com:

qualia (noun): plural form of quale -- properties not definable with numbers; a quality.
Antonyms: quanta

................................................................................

Again, the concept is elementary -- in more ways than one -- yet here we have a gaggle of posters pretending as if it's somehow 'incoherent' or 'irrelevant' to the discussion of consciousness when the concept itself refers to the very basis of consciousness. Any theory of consciousness that fails to address or meaningfully integrate the concept of qualia is not a theory of consciousness at all. Period.
 
As for myself, it isn't really that I think it indicates a lack of intelligence, rather it indicates a worldview that I find unacceptable.

For Roger Penrose to seriously consider that there exists actual Platonic ideals of ethics and aesthetics embedded into the fabric of the universe is an insult to much of what I as a person stand for.

This view simply smacks of elitism and even racism.
Wow! The connection between those statements is tenuous to say the least. And to take insult that someone else considers platonic ideals a metaphysical possibility seems completely irrational to me. Fairly common though, now that I think about it.

So for anyone to actually give validation to such a viewpoint -- even a teeny tiny bit -- is also a little insulting. I don't like that Beth even considers it possible. What does that say about Beth? That she thinks it is "possible" that Platonic ideals really exist? That is tantamount to saying it is "possible" that whites are superior to blacks.
First of all, I don't make the connection of platonic ideals = racism.

Second of all, what is so awful about considering the possibility that whites are superior to blacks? Is it just as evil to consider it possible that blacks are superior to whites? The 'reality' of those assessments depends on how one defines 'superior'. If one defines superior based on the color of skin, then albinos would be either the 'best' or 'worst' sort of people depending on how one orients the scale. IMO, color of skin is a pretty poor definition of 'superior' regardless of the orientation of the scale.

My own opinion is that the best attitude to take, both personally and as social policy, is the clearly false ideal that 'all people are created equal'. Would that statement be a platonic ideal? I don't know. Outside of numbers, I have no real idea what a platonic ideal would be. But accusing Penrose of racism because of his belief in platonic ideals seems pretty far fetched to me.
 
Right. That is the problem with the concept of qualia. They are explicitly defined as being what is left over once what can actually be demonstrated to exist has been eliminated.

By definition, qualia don't exist.

Ya stupid prick, you're experiencing them right now....If you're not then you're not conscious and, therefore, not a person.
 
Can you (or someone else) help direct me to what you're referencing? I skimmed the last couple pages and got lost. :(


Sure, I'm referring to the wiki page about Hameroff's proposal from post 3544. RD originally referred to it in post 3531.


His proposal has been more thoroughly debunked by Michael Shermer and several others at various times, including in one of Shermer's Skeptic articles a few years ago, but it would take time for me to locate that.
 
The word 'qualia' refers to feelings, emotions, sensations, perceptions etc as -experienced by a subject-. It's a very simple concept to grasp. I can only assume that certain individuals are deliberately playing dumb to obstruct this conversation from progressing in directions they're not comfortable with.


What is a feeling, emotion, sensation, perception and experience?
 