
Has consciousness been fully explained?

What's the prevailing wisdom on whether human consciousness is somewhat (perhaps even completely) "learned" versus being "wired in" via genetics/evolution?

Consider a newly born baby. What kind of consciousness might it be experiencing? Presumably it has essentially no idea of the meaning of language, or of what to make of the input arriving from its eyes and ears except in a rudimentary way. Some input has been arriving while it was developing in the womb, so those systems have been exercised to some degree, but I'm not sure I can imagine what "meaning" it might consciously attribute to any particular inputs at that point.

I'd be interested in knowing more about the experiences of people who may have been completely blind or deaf "from conception onwards" (so far as that makes sense) and then regained those senses at some much later stage of life when they were able to communicate reasonably clearly what that experience was like for them.

If a human embryo developed in such a way that none of the usual senses were functioning, would we still expect consciousness to be present later (assuming the body as a whole still continued to grow and function "normally", insofar as that was possible)?

Language is not necessary for consciousness, and consciousness is not learned.

Learning can happen in ways that do or do not involve the parts of the brain involved in generating conscious experience.

But a baby is wired to understand a very great deal of what's thrown at it. It is nowhere near a tabula rasa.
 
The brain is a mass of physical stuff. It acts just like any other mass of physical stuff.

It's a rather specific mass of physical stuff, but ok...

One of the things it does is perform a behavior we call consciousness.
To the extent that information processing is a behavior, but no further.


If we want to build a machine that does the same thing, it will have to perform an equivalent set of actions in 4-D spacetime in order to achieve that result.
No. It just has to perform an equivalent set of actions, period.

On the other hand, a machine that runs simulations is a machine that runs simulations. If that's what it's built to do, that's what it does, regardless of the real-world behaviors of the systems which it symbolically represents in simulations.
And those simulations process information, an advanced degree of which is all we need for consciousness.


If you want a machine that does what the brain does, including varying the synaptic connections, then you have to build a machine that actually does that, not a machine which simulates a system which does that.
True, but if what I want is consciousness, and if I don't care about flesh-and-blood brain matter, then simulated synaptic connections will do just fine.

Your claim that the brain is an "organic computer" is unfounded.
Is not.

And on a further note, if you're actually tired of the merry-go-round, as you claimed, you need to start actually engaging my arguments instead of repeating your positions.
 
You're moving the goalposts. The original exchange was:

I'm not an expert on consciousness, I suspect neither are you. Like the Supreme Court and Pornography, I know it when I see it. I'm not "moving goal posts", rather I am refining my answer so that the test actually makes bloody sense...or more bloody sense. I'm not pretending it is 100% perfect, but I think we can both agree that consciousness is a characteristic that in theory should be something we can deduce by observing behavior.

Or do you think that there could be a non-conscious version of you that acts just like you in all regards and has behavior that is completely indistinguishable from yours? If you don't think that, then you agree with my position, and I think we can both agree that the precise nature of a test is rather hard to state explicitly, probably in part because consciousness can be a little hard to state explicitly.

The bolded part is key here.
 
I said "no" probably 60 or 70 pages ago.

It is obviously the correct answer.

As this thread has run on longer than I'm willing to spend catching up on, can someone please summarize the current points of contention for me?
 
Better?

I don't know what that means.

They're certainly different.

A good simulation is an excellent way to predict the behaviour of a system. Of course, it's necessary that the system behave in a predictable way, and that the rules by which it operates are well understood. For most purposes, a simulation is far more flexible and usable than a model.

Naturally, framing the argument in terms of "Models good, simulations bad" is a handy strawman to add to the stack.
 
No, which is exactly my point.

Because the calculator doesn't actually add two and two to get four.

Simulations are abstractions and require interpretation.

If you don't understand the symbol system, then you have no idea that the calculator is supposed to have "added" two and two to get four. All you have is an array of lights or a pattern of ink on paper.


But that is just an issue of interpretation of the output; I might misinterpret what you say, but you still said it; and I might not be able to understand directions in Mandarin, but they are still directions. The computer still does the operation of adding two and two to get four. It still follows the rules that comprise that action whether or not anyone interprets it properly. No one can use it if they can't interpret it, but the constrained action actually occurs.

This differs from two rocks falling because there could have been any number of reasons why the rocks fell. Someone has to impose the idea of addition on that situation to see addition in falling rocks, even in theory. The same is not the case with a calculator, where addition is already defined for the system. Meaning is imparted into its construction because the movement of electrons is constrained in a particular way for a particular purpose.
 
I'm not an expert on consciousness, I suspect neither are you. Like the Supreme Court and Pornography, I know it when I see it. I'm not "moving goal posts", rather I am refining my answer so that the test actually makes bloody sense...or more bloody sense. I'm not pretending it is 100% perfect, but I think we can both agree that consciousness is a characteristic that in theory should be something we can deduce by observing behavior.

Or do you think that there could be a non-conscious version of you that acts just like you in all regards and has behavior that is completely indistinguishable from yours? If you don't think that, then you agree with my position, and I think we can both agree that the precise nature of a test is rather hard to state explicitly, probably in part because consciousness can be a little hard to state explicitly.

The bolded part is key here.

There are two aspects to consciousness - the behaviour, which we associate with being conscious, and the state we each experience individually. We assume that people who have the same physical processes and behaviour as us share consciousness. It's an imperfect assumption, but it's the best we can do.

When something doesn't have the same physical structure as us, but exhibits some of the behaviours associated with consciousness, we tend to assume it isn't conscious. Thus, while we assume that an actor is conscious, the representation/simulation of him on a DVD is not. This assumption is so basic it hardly seems necessary to even mention it. The idea that the people on your TV are in any sense real is too absurd to even consider.

But the idea seems to be that if we can produce something that is close enough to behaving like a person, then it will be definitely conscious - in fact, to even question it shows not skepticism, but credulity. So the challenge for the developers of artificial people is not to understand and recreate a phenomenon - it's simply to do their best to fool us. Trick us into believing that we are interacting with a conscious person in some way - and then we must be!

How should we determine whether someone is conscious? We should understand the phenomenon before trying to measure it.
 
This differs from two rocks falling because there could have been any number of reasons why the rocks fell. Someone has to impose the idea of addition on that situation to see addition in falling rocks, even in theory. The same is not the case with a calculator, where addition is already defined for the system. Meaning is imparted into its construction because the movement of electrons is constrained in a particular way for a particular purpose.

Meaning is imparted into the system by the intentions of human beings. That is the only means by which any physical system can have meaning.

The falling rocks can be a means of addition if a conscious being uses them for such a purpose. And if someone throws a calculator across the room and the keys hit the wall, it's not performing a calculation because there's nobody to interpret it.
 
Meaning is imparted into the system by the intentions of human beings.


If by system you mean calculator or computer, then yes.


That is the only means by which any physical system can have meaning.

No.

The falling rocks can be a means of addition if a conscious being uses them for such a purpose.

Yes.

And if someone throws a calculator across the room and the keys hit the wall, it's not performing a calculation because there's nobody to interpret it.

It is performing a calculation no matter what because that is what it is designed to do. It is not performing a useful calculation because the numbers were randomly put into the system. I can do the same thing by randomly punching numbers and function keys. There will be an output, and that output will occur because the machine adds -- it has defined functions that perform that action. The output, however, is meaningless because the input was meaningless.

I might look at the number after the fact and impose some meaning on it and think, OMG what numbers did I add together there, that's the solution to Fermat's Last Theorem (I know, mischaracterization, but you get the point); but that is a separate meaning that I impose on a sum that was generated by introduction of random numbers.
 
Meaning is imparted into the system by the intentions of human beings. That is the only means by which any physical system can have meaning.


I guess I should explain my answer "No". I'm sorry but I'm going to begin with Pixy's charge -- if you really believe that, then you are a dualist.

We are physical systems. Our brain is a physical system. If the physical system of our brain can have meaning only because of the intention of a human being, then intention cannot arise from our brain since the brain is a physical system and physical systems require imposition of meaning from outside by your argument.

We've already seen a good definition of information and restrictions on information that account for the creation of meaning in this thread. A physical system can create/impose meaning.
 
A physical system can create/impose meaning.

And we've been over this many times - but have yet to come up with a physical theory of how physical systems can have meaning.

The idea that something designed by a conscious being can have inherent meaning is at least a start, but I don't see that the meaning can be separated from the being that designed and created it.

If this means denying that anything can have meaning, then so be it. That's not something I agree with, but at least it's consistent.
 
And we've been over this many times - but have yet to come up with a physical theory of how physical systems can have meaning.

The idea that something designed by a conscious being can have inherent meaning is at least a start, but I don't see that the meaning can be separated from the being that designed and created it.

If this means denying that anything can have meaning, then so be it. That's not something I agree with, but at least it's consistent.


I've mentioned how this is possible with regard to receptor function several times, so I will mention it again, using Blobru's excellent improvement...


Information is defined as: a change in system C through the action of intermediary B because of a change in system A. By this definition, virtually every action in the universe is information.

Information may be further defined/refined based on its specificity. If only the change in system A can cause the change in system C by means of B (no other cause is possible), then the information is specific. If many different causes for the change in system C are possible, then the information is non-specific and essentially amounts to random chance or noise. But this is not an absolute distinction; rather, there is a continuum based on the number of possible causes for the change in system C (the number of possible system As that can produce a given change in C).
The nervous system places constraints on information so that it is more specific, and that is how we create meaning.
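To make that A -> B -> C definition concrete, here is a minimal sketch in Python (my own illustration, not anything proposed in this thread; the class name and the simple 1/n specificity measure are assumptions). It treats specificity as a function of how many distinct causes (possible system As) could have produced a given change in C via the intermediary B: one possible cause gives maximal specificity, many possible causes shade toward noise.

```python
# Hypothetical sketch of the A -> B -> C definition of information given above.
# The names and the 1/n specificity measure are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class Channel:
    """Intermediary B: records which causes (system As) can produce a given change in C."""
    possible_causes: dict = field(default_factory=dict)  # change_in_C -> set of causes

    def register(self, cause: str, change_in_c: str) -> None:
        """Note that `cause`, acting through this channel, can produce `change_in_c`."""
        self.possible_causes.setdefault(change_in_c, set()).add(cause)

    def specificity(self, change_in_c: str) -> float:
        """1.0 when only one cause could have produced the change; approaches 0
        as more causes become possible (the 'continuum' described above)."""
        causes = self.possible_causes.get(change_in_c, set())
        return 1.0 / len(causes) if causes else 0.0

# Usage: a change in C ("spike") that only "touch" can produce is fully specific;
# one that touch, heat, or a dying cell could all produce is closer to noise.
b = Channel()
b.register("touch", "spike")
print(b.specificity("spike"))   # 1.0  -> specific information
b.register("heat", "spike")
b.register("dying cell", "spike")
print(b.specificity("spike"))   # ~0.33 -> less specific, more like noise
```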

For instance, if we look at the world at large we see a blooming, buzzing mess with information everywhere. But not all information is meaningful to us. We begin to impose meaning by restricting the types of information in that blooming, buzzing mess that are important. This process begins at the level of sensory receptors. Somatosensory receptors will respond to a very small number of physical stimuli to produce a particular type of response – an action potential. That change in system C, the action potential, is defined by the way the nervous system is constructed; there are only two basic types of response: action potential or no action potential. Different aspects of the original stimulus, however, may be transmitted in the change in system C (the neuron), since the duration of the stimulus is coded by how long the train of action potentials continues and the intensity of the stimulus is coded in the frequency of action potential firing. Location on the body is maintained throughout, so location information also carries through.

Since not just anything activates a somatosensory receptor, the information it transmits is relatively specific. This specificity provides the origin of meaning. Activation of a somatosensory receptor means something; we are designed in such a way that it means that something is touching us. No observer had to impose this meaning; it arose through the process of random mutation and natural selection.

Receptor function is not completely specific because there are other possible stimuli that can cause the neuron to fire other than touch (dying cell, etc.) – and it is this fact, further along in the processing chain, that permits the possibility of hallucination.
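For what it's worth, the duration/intensity coding described above can also be sketched as a toy rate-coding model in Python (again my own illustration with invented numbers and function names, not a claim about real receptor physiology or about anything in the original post): the spike train lasts as long as the stimulus (duration coding), the firing rate scales with intensity (frequency coding), and the body location is simply carried along unchanged.

```python
# Toy rate-coding sketch of the receptor description above: a hypothetical
# somatosensory receptor whose spike train lasts as long as the stimulus
# (duration coding) and whose firing rate scales with intensity (frequency
# coding), while the body location is passed through untouched.
# All numbers are invented for illustration.

def receptor_response(intensity: float, duration_s: float, location: str,
                      max_rate_hz: float = 100.0):
    """Return (location, spike_times) for a stimulus of given intensity (0..1)
    and duration in seconds at a given body location."""
    rate_hz = max(0.0, min(intensity, 1.0)) * max_rate_hz  # intensity -> firing rate
    if rate_hz == 0.0:
        return location, []                                # no action potentials
    interval = 1.0 / rate_hz
    spike_times = []
    t = 0.0
    while t < duration_s:                                   # duration -> train length
        spike_times.append(round(t, 4))
        t += interval
    return location, spike_times

# A light touch vs. a firm press on the same fingertip: same location,
# same duration, different spike counts (different firing frequencies).
print(receptor_response(0.2, 0.5, "left index fingertip"))
print(receptor_response(0.9, 0.5, "left index fingertip"))
```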
 
I guess I should explain my answer "No". I'm sorry but I'm going to begin with Pixy's charge -- if you really believe that, then you are a dualist.

We are physical systems. Our brain is a physical system. If the physical system of our brain can have meaning only because of the intention of a human being, then intention cannot arise from our brain since the brain is a physical system and physical systems require imposition of meaning from outside by your argument.
I'd say the point is more that we know certain (for the sake of argument, physical) systems are conscious. So far all are lifeforms, with a biological substrate.

Pixy et al. think that the "correct" computer code can run on any substrate and will (I'd say "magically") become conscious -- which includes running that code with pencil and paper. Where and how consciousness could possibly exist in that last scenario beggars belief; substrate does seem to play some part.

We've already seen a good definition of information and restrictions on information that account for the creation of meaning in this thread.
I disagree. No one has come even close to defining information any better than they've defined consciousness.

Some entity capable on its own of defining meaning is needed, and computers are not in that category; we provide the meaning component of anything computed.


A physical system can create/impose meaning.
Yes. It's named "life". Or you may agree with RD, who states a single-cell simulation is alive; I, and I suspect most, don't.
 
I disagree. No one has come even close to defining information any better than they've defined consciousness.

What is wrong with the definition of information that RD offered?

Some entity capable on its own of defining meaning is needed, and computers are not in that category; we provide the meaning component of anything computed.

No, if we are talking monism, then an entity defining meaning itself (lifting itself up by the bootstraps) is not possible. That already implies dualism, as I explained above. Meaning must arise from interactions amongst the single substance that is, if monism can make sense.
 
No, if we are talking monism, then an entity defining meaning itself (lifting itself up by the bootstraps) is not possible. That already implies dualism, as I explained above. Meaning must arise from interactions amongst the single substance that is, if monism can make sense.
That is one of the strongest suggestions that classical materialism -- that is, the view that the ur-substance is inert -- is wrong.
 
I'm not an expert on consciousness, I suspect neither are you. Like the Supreme Court and Pornography, I know it when I see it. I'm not "moving goal posts", rather I am refining my answer so that the test actually makes bloody sense...or more bloody sense. I'm not pretending it is 100% perfect, but I think we can both agree that consciousness is a characteristic that in theory should be something we can deduce by observing behavior.

Or do you think that there could be a non-conscious version of you that acts just like you in all regards and has behavior that is completely indistinguishable from yours? If you don't think that, then you agree with my position, and I think we can both agree that the precise nature of a test is rather hard to state explicitly, probably in part because consciousness can be a little hard to state explicitly.

The bolded part is key here.

I'd like to think there's not. Honestly, I don't know.
 
I said "no" probably 60 or 70 pages ago.

It is obviously the correct answer.

As this thread has run on longer than I'm willing to spend catching up on, can someone please summarize the current points of contention for me?


If you build it they will/will not come.
 
Since not just anything activates a somatosensory receptor, the information it transmits is relatively specific.

But I don't see the definition of a somatosensory receptor as being meaningful except as it relates to a conscious being.
 
What is wrong with the definition of information that RD offered?


Any such definition must be strong enough that some third person could take the definition and apply it and get the same results. I haven't seen any of RD's attempts come near to reaching that standard.
 