Has consciousness been fully explained?

Status
Not open for further replies.
... Keep in mind that the definition of information that RD provided does not concern itself with meaning. Meaning is something that arises through a different process than something merely being information. It has to be a certain type of information used for a purpose, which is why I think it arises in living organisms; the original purpose being survival.

Nice definition of "meaning", wasp: usefulness (information's meaningful if it enhances survival; the 'emotional' component of meaning). Of course, meaning is a slippery term, too. One often speaks of meaning in terms of understanding, where the "meaning" of information is simply the inference that should be drawn: in the A (action) --> B (intermediary) --> C (reaction) model, the "meaning" of B is "A": that A is present / has happened.

... How is a single substance out the window if maths are invented? Math is just a way of describing relationships. And, yes, mathematical realism is a form of dualism. But none of this argument concerns mathematical realism. That is much closer to Beth's position.

Simply put, math describes all possible orders. Existing implies we exhibit order. Science is about finding out which.
 
There are two aspects to consciousness - the behaviour, which we associate with being conscious, and the state we each experience individually. We assume that people who have the same physical processes and behaviour as us share consciousness. It's an imperfect assumption, but it's the best we can do.

When something doesn't have the same physical structure as us, but exhibits some of the behaviours associated with consciousness, we tend to assume it isn't conscious. Thus we assume that while an actor is conscious, the representation/simulation of him on a DVD is not. This assumption is so basic it hardly seems necessary to even mention it. The idea that the people on your TV are in any sense real is too absurd to even consider.

But the idea seems to be that if we can produce something that is close enough to behaving like a person, then it will definitely be conscious - in fact, to even question it shows not skepticism, but credulity. So the challenge for the developers of artificial people is not to understand and recreate a phenomenon - it's simply to do their best to fool us. Trick us into believing that we are interacting with a conscious person in some way - and then we must be!

How should we determine whether someone is conscious? We should understand the phenomenon before trying to measure it.

If consciousness does not indicate itself under any test of behavior (including all manner of expression), then it does not exist as a psychological phenomenon. To say otherwise is to essentially say that god also exists because people "personally experience him." Actually, the god bit would have more credence in a way, since that actually DOES produce a change in behavior. All you've done is set up an arbitrary test of what is "conscious" that allows you to dismiss anything you don't like without sound reason.

Your movie comparison is NOT a good one. The behavior of a character on television cannot be tested even in theory.
 
Nice definition of "meaning", wasp: usefulness (information's meaningful if it enhances survival; the 'emotional' component of meaning). Of course, meaning is a slippery term, too. One often speaks of meaning in terms of understanding, where the "meaning" of information is simply the inference that should be drawn: in the A (action) --> B (intermediary) --> C (reaction) model, the "meaning" of B is "A": that A is present / has happened.



Simply put, math describes all possible orders. Existing implies we exhibit order. Science is about finding out which.


Thanks. Yes, I have been using meaning in both ways -- in its simplest form as A being present, but also, in terms of some form of "intent" or usefulness based in survival.

As you know all of these words have multiple meanings that share a family resemblance, so discussions about them are always fraught with 'issues'.
 
On simulations, duplicating behavior, and functional equivalence.

Consider two sewage treatment systems. In both of them you have input in terms of dirty water and output in terms of clean water. This is the critical characteristic and function of a sewage treatment system. You can take out any other part of the description or function of a sewage treatment system, but as long as it continues to accept bad water and output good water, then it works. Well, we also assume it isn't just dumping the bad water somewhere untreated (we'd count that as an output of bad water for the purposes of this description). It doesn't matter if it uses chemicals, nanotechnology, electrolysis and fuel cells, plants, or whatever.

The brain is no different. It has a function in the body, and as long as a given system takes in the same inputs and puts out the same outputs, it is functionally equivalent. It doesn't matter if it runs on electricity, blood, gears, a committee (though good luck making one that can respond quickly enough), or whatever. If the behavior is not distinguishable from entities we claim to know are "conscious" then claiming it is missing some sort of "spark" is a silly appeal to the supernatural.

It is true that a computer simulation of water cannot be functionally equivalent to water. The brain is different in this regard, as there is no reason to think a simulation of the brain could not be functionally equivalent to the brain (you just have to attach the inputs and outputs, which aren't that much different from electrical wires).
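The functional-equivalence claim above can be sketched in code: two implementations with entirely different internals are equivalent, in this sense, exactly when they produce the same outputs for the same inputs. A minimal sketch, with invented function names and a toy string transformation standing in for real treatment:

```python
# Two structurally different "treatment systems" with the same
# input -> output mapping. All names and the toy transformation
# are illustrative, not from the discussion.

def purify_chemical(dirty: str) -> str:
    """Stand-in for a chemical treatment route."""
    return dirty.replace("dirty", "clean")

def purify_biological(dirty: str) -> str:
    """Stand-in for a biological filtration route (different internals)."""
    prefix, found, rest = dirty.partition("dirty")
    return prefix + "clean" + rest if found else dirty

# Functional equivalence: judged only at the input/output boundary.
samples = ["dirty water", "very dirty water"]
for system in (purify_chemical, purify_biological):
    assert [system(s) for s in samples] == ["clean water", "very clean water"]
```

The internals differ; only the input/output boundary is compared, which is the sense of "functionally equivalent" used in the post.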
 
A squirrel is probably conscious because its brain is likely doing the same physical stuff that makes us conscious, too.

So, what about your conscious toaster? How is that designed?

Well, you look at the difference between a human and a squirrel, and then see what more you could remove and have the thing still be conscious. Maybe you could whittle it down to something like a very expensive toaster.

Or are you going to arbitrarily label a squirrel as the least complex creature that is conscious?

Hmm?
 
The problem here is, we don't yet have any way of knowing whether such behavior could result from brain activity that doesn't involve consciousness.

I didn't say consciousness.

I said self recognition. Self reference.

And, yes, we do know if such behavior can result from something else. It cannot. Self reference can only result from self reference.
 
If consciousness does not indicate itself under any test of behavior (including all manner of expression), then it does not exist as a psychological phenomenon. To say otherwise is to essentially say that god also exists because people "personally experience him." Actually, the god bit would have more credence in a way, since that actually DOES produce a change in behavior. All you've done is set up an arbitrary test of what is "conscious" that allows you to dismiss anything you don't like without sound reason.

Your movie comparison is NOT a good one. The behavior of a character on television cannot be tested even in theory.

A brain-in-a-vat would not be able to indicate that it's conscious. That does not entail that it's unconscious.
 
OK, but not sure exactly how. I can see that in terms of the Darwinian universes idea, but the ultimate stuff that exists just exists. I don't think we can ever arrive at any better explanation than that.
Agreed.

OK, but I don't see any reason to think in terms of intention structured into the universe. It's a natural consequence of one way of the universe unfolding with natural selection, I think.
Or one can look around and see both intention and natural selection.

It might be the case, though, that intent is part of the fabric of reality.
I like that phrase .... :)

I don't think we have any way of knowing and I have no need of that hypothesis.
Your choice, so to speak.

I think we do have ways of knowing even though, so far, they are not objective.
 
Agreed.

Or one can look around and see both intention and natural selection.

I like that phrase .... :)

Your choice, so to speak.

I think we do have ways of knowing even though, so far, they are not objective.

Fair enough. You may be right; I have no way to know either way.
 
Nice definition of "meaning", wasp: usefulness (information's meaningful if it enhances survival; the 'emotional' component of meaning). Of course, meaning is a slippery term, too. One often speaks of meaning in terms of understanding, where the "meaning" of information is simply the inference that should be drawn: in the A (action) --> B (intermediary) --> C (reaction) model, the "meaning" of B is "A": that A is present / has happened.
Yes, it seems to me that, in the sense we are discussing, data (the raw stimulus or input) becomes information and so has meaning when it has been interpreted (context-dependence) and found to be relevant. So for a hypothetical single cell organism, a stimulus to a contact sensor in the context of feeding might trigger an 'engulf food particle' response, and in the context of navigation, might trigger an avoidance response. The context evaluation leading to the response might be just a dependence on chemical gradients, with the response depending on the current level of a particular chemical, but we might say the stimulus data is interpreted (i.e. given meaning) as a contact via the sensor, thus becoming information, and this contact information is then evaluated or interpreted according to the current context, and the appropriate action is taken. With a simple organism, where activity is on the level of reflex stimulus-response, with negligible intermediate processing, it becomes apparent that our description in terms of information, meaning, and interpretation is somewhat anthropomorphic - i.e. it relates to how we interpret what is happening, and involves generalisations that we apply to the behaviour of complex (especially living) things.

If we built a simple mechanical (or electrical/electronic) model that performed two different actions in response to the same input, depending on some internal state, would we use the same kind of language (e.g. 'information', 'interpretation' and 'meaning') to describe its behaviour? If not, why not?
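The hypothetical model in that question can be sketched as a tiny state machine: one input signal, two internal states, two different actions. A minimal sketch, with all names invented for illustration (the "engulf"/"avoid" responses echo the single-cell example above):

```python
# Minimal sketch of a model that performs two different actions in
# response to the same input, depending on internal state.
# Class and method names are invented for illustration.

class ContextModel:
    def __init__(self, state: str = "feeding"):
        # Internal state stands in for the 'context' (e.g. a chemical level).
        self.state = state

    def on_touch(self) -> str:
        # Identical input signal; the response depends only on internal state.
        if self.state == "feeding":
            return "engulf"
        return "avoid"

m = ContextModel()
assert m.on_touch() == "engulf"   # same stimulus...
m.state = "navigating"
assert m.on_touch() == "avoid"    # ...different response
```

Whether we would describe this machine's behaviour using words like 'interpretation' and 'meaning' is exactly the question the post poses; the code only shows how little machinery the behaviour itself requires.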

Data is not information. Information is not knowledge. Knowledge is not wisdom. Wisdom is not truth. Truth is not beauty. Beauty is not love. Love is not music. Music is best...
Zappa et al
 
What we might term God's Laws (math & physics provide our best understanding) provide both ways and means for the lowest intentful bits to combine and recombine.

Once the needed environment, RNA/DNA, and cell structure were reached, evolution kicked in and here we are. Intentful stuff combining into more refined intentful stuff.

Intentful how?

Intent implies purpose, or planning, which suggests some level of decision making... or are you saying that we can interpret evolutionary development as intentful - i.e. feature A evolved in order to provide advantage B ?
 
Yes, it seems to me that, in the sense we are discussing, data (the raw stimulus or input) becomes information and so has meaning when it has been interpreted (context-dependence) and found to be relevant.

There's the 'objective', ideal inference (knowledge of the actual cause), plus the 'subjective' interpretation (in the context of the internal state of the organism), and behavior based on that interpretation.

So for a hypothetical single cell organism, a stimulus to a contact sensor in the context of feeding might trigger an 'engulf food particle' response, and in the context of navigation, might trigger an avoidance response. The context evaluation leading to the response might be just a dependence on chemical gradients, with the response depending on the current level of a particular chemical, but we might say the stimulus data is interpreted (i.e. given meaning) as a contact via the sensor, thus becoming information, and this contact information is then evaluated or interpreted according to the current context, and the appropriate action is taken. With a simple organism, where activity is on the level of reflex stimulus-response, with negligible intermediate processing, it becomes apparent that our description in terms of information, meaning, and interpretation is somewhat anthropomorphic - i.e. it relates to how we interpret what is happening, and involves generalisations that we apply to the behaviour of complex (especially living) things.

Yes, great description. And necessarily anthropomorphic, I think, since we're the ones trying to understand what's going on; and really not all that different from human notions of information (this post, for example, is the intermediary for my ideas, which you interpret in your context: if my ideas are coherent and well-expressed by the post, you get the idea I intend from these intermediary symbols, and there is a useful, at least for our discussion, information exchange; not so different from the cell interpreting contact stimuli [hopefully you choose to absorb my post rather than flee it!]) :p

If we built a simple mechanical (or electrical/electronic) model that performed two different actions in response to the same input, depending on some internal state, would we use the same kind of language (e.g. 'information', 'interpretation' and 'meaning') to describe its behaviour? If not, why not?

Don't see any reason why not. We might be more reluctant to anthropomorphize machines, I suppose. But so long as we stay alert to any and all differences between the biological cell and mechanical cell, it only makes sense to point out the similarities as well.
 
If we built a simple mechanical (or electrical/electronic) model that performed two different actions in response to the same input, depending on some internal state, would we use the same kind of language (e.g. 'information', 'interpretation' and 'meaning') to describe its behaviour? If not, why not?
Information, interpretation, and meaning would exist for the builder. Where do you suggest it exists for the model itself? I see none therein.

Intentful how?

Intent implies purpose, or planning, which suggests some level of decision making...

I may go with "intent is just part of the fabric of space-time". Thanks, wasp. :D

or are you saying that we can interpret evolutionary development as intentful - i.e. feature A evolved in order to provide advantage B ?
I would not. One can examine the thrust of evolution and see ever increasing complexity, punctuated by massive die-offs and do-overs, each in a new environmental suite.
 
To the extent that information processing is a behavior, but no further.

Ok, let's stop here for a moment.

I wake up, my brain uses resources to crank up some process which results in my being able to have an experience of being aware of myself and my surroundings.

My waking up is evidence that my brain is engaging in some sort of behavior it wasn't engaging in before, because I'm experiencing the effects.

We live in a physical universe of matter and energy (and nothing else, as far as we know), the brain is a physical object, so as far as I can see, there must be a physical process going on here.

You say no, that "information processing" is actually what's responsible, although "information" is an abstraction, not matter or energy.

So somehow this organ in my cranium is "processing" an abstraction, and this results in changes which make things happen when I wake up which were not happening previously.

In order for me to take this claim seriously, I'm going to have to ask you explain how that works.

Seriously.

Tell me how this "information processing" gins up my ability to be conscious when I wake up.

Until and unless you can do that, I really have no reason to give any credence to your claim, which is nonsensical on its face.
 
No. It just has to perform an equivalent set of actions, period.

After you answer my previous question, perhaps you can also explain how it's possible for a machine to perform actions outside of 4-D spacetime.
 
But that is just an issue of interpretation of the output; I might misinterpret what you say, but you still said it; and I might not be able to understand directions in Mandarin but they are still directions.

If I say something to you, then it's true that I spoke -- that is, I made noise with my mouth -- even if you can't understand me for whatever reason.

On the other hand, when you look at the screen, if you can't interpret the readout, then it's not true that "addition" has taken place. All that's true is that the screen lit up.

It's just like the abacus. In objective physical reality, all that happens is that someone moves beads. The addition is entirely in that person's imagination. Digital calculators are no different.
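The abacus point can be made concrete: the device only has physical state (bead positions); a number, and hence "addition", appears only under an interpretation convention imposed from outside. A toy sketch, with the encoding invented for illustration:

```python
# Toy illustration: bead positions are just physical state; "addition"
# only appears under an observer's interpretation mapping.
# The three-rod place-value encoding is invented for illustration.

physical_state = [3, 0, 0]  # beads moved on three rods (units, tens, hundreds)

def interpret(rods: list) -> int:
    """An observer's convention assigning a number to bead positions."""
    return sum(count * 10**place for place, count in enumerate(rods))

# Moving two more beads on the units rod is, physically, just motion...
physical_state[0] += 2
# ...it counts as "3 + 2 = 5" only relative to the interpret() convention.
assert interpret(physical_state) == 5
```

Nothing in `physical_state` itself is arithmetic; swap in a different `interpret` function and the same bead motion "computes" something else, which is the post's claim about calculators.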
 
Imagine a simple organism that has a touch receptor and a means of movement with the touch receptor directly linked to movement. In a hostile environment, where the organism survives only through getting away from touch, touch means death is on the way; so it has its touch receptor linked to movement so that it can move away from whatever touched it. In a non-hostile environment, touch may mean sex and so the touch receptor is linked to movement in such a way that the organism moves closer to whatever touches it.

You're jumping the gun.

If you've got a very simple organism like that, it really makes no sense to talk about what the touch "means". It's just that the thing has evolved to react in certain ways.

And if you take physical science seriously, our brains are doing only that, too... but in an enormously complex way.

It's all physio-energetic action and reaction, nothing more.

Of course, it's so complex (and, so far, unamenable to direct research) that we're forced to describe situations in other terms.

I saw the STOP sign, I understood what it meant, so I stopped the car.

But that's a shortcut for whatever is really going on, on a purely physical level -- which is the only real level, as far as we know.
 
On simulations, duplicating behavior, and functional equivalence.

Consider two sewage treatment systems. In both of them you have input in terms of dirty water and output in terms of clean water. This is the critical characteristic and function of a sewage treatment system. You can take out any other part of the description or function of a sewage treatment system, but as long as it continues to accept bad water and output good water, then it works. Well, we also assume it isn't just dumping the bad water somewhere untreated (we'd count that as an output of bad water for the purposes of this description). It doesn't matter if it uses chemicals, nanotechnology, electrolysis and fuel cells, plants, or whatever.

The brain is no different. It has a function in the body, and as long as a given system takes in the same inputs and puts out the same outputs, it is functionally equivalent. It doesn't matter if it runs on electricity, blood, gears, a committee (though good luck making one that can respond quickly enough), or whatever. If the behavior is not distinguishable from entities we claim to know are "conscious" then claiming it is missing some sort of "spark" is a silly appeal to the supernatural.

It is true that a computer simulation of water cannot be functionally equivalent to water. The brain is different in this regard, as there is no reason to think a simulation of the brain could not be functionally equivalent to the brain (you just have to attach the inputs and outputs, which aren't that much different from electrical wires).

Where do you get that last sentence from?

You simply leap to it. It's a non-sequitur.

Yes, if you replace a brain with some physical thing that carries out the same tasks, well, you've replaced it with an artificial brain.

However, one cannot replace any physical object with a digital simulation of that object, because digital simulations are abstractions, not real things.

If you want to claim the brain is an exception, you have to explain how this could possibly be true; you can't simply assert it.

If you want something that's going to physically accept the electro-chemical inputs and to physically output the necessary electro-physical stuff that will keep the chain reaction going outward toward the rest of the body, you need a physical apparatus, not a simulation of such an apparatus.

So, what will this artificial brain look like? How will it work?

We can't say with any level of detail because we don't know how the brain does most of what it does.

But we do know that the brain does not run simulations, so we can be confident that it can't be replaced with a machine that runs simulations. It has to be an object that does brainy kinds of stuff, physically.

It can't be an object that does other sorts of stuff physically.
 
