
Has consciousness been fully explained?

Right. But the machine still adds.

No, it does not.

If you have a machine that takes two of something and combines them with two of another thing to create a group of four, then this is apparent to anyone who looks at it. We can all utilize the machine in the same way, regardless of the language we speak or whether we can read or anything else like that.

On the other hand, if you don't understand the symbol system used in the display of a computer or digital calculator, it is impossible for you to determine that any symbolic "addition" has taken place.

No real addition has taken place. Rather, we have used a very different physical apparatus (which does not combine groups to get larger groups) to assist our imaginations.
 
A computer running a simulation would work perfectly.

No, it would not, because it is not designed to.

Running a simulation of a power plant is not the same as running a power plant.

I'll let westprog review the details of that fact for you.
 
Let me try to answer this again in slightly more detail, because I think dlorde's way of reframing it as a single-celled organism (which is how it would actually have evolved in the first place) is better than my clumsy attempt. His example is also better than the one I offer below, because he adds more details, but I want to focus on just a few aspects.

I think it does make sense to speak of what a touch means even with purely instinctual responses. It does not make sense to speak of the organism understanding the meaning, though. Clearly I am not referring to linguistic meaning here.

Let's say we have a single-celled organism with three receptors -- one that senses simple sugars, which it uses for food; one that senses the presence of sulfur, which will kill it in high concentrations; and one that senses the presence of other organisms of its kind in order to facilitate DNA transfer (say, each organism excretes an identifying peptide). Each of these receptor proteins links to the cytoskeleton in order to produce movement either toward or away from a stimulus. The behavior of such an organism would change depending on its chemical environment. It responds to only three things, with either approach or avoidance, but each of those signals has a meaning for this organism -- it is a particular type of signal that alters its behavior in a way that enhances its survival.

This organism exists in an environment in which all sorts of environmental issues arise. It can run into streams of arsenic, but it has no receptors for it, and arsenic does nothing to its internal function, so arsenic is not meaningful to this organism. It can be jostled by a swimming fish -- something that clearly changes the organism (the fish swimming by constitutes information) -- but being jostled by a fish ends up having no positive or negative impact on this organism's survival. So, while the swimming fish is information (poor information, it turns out, because many other things could jostle the cell), it is not meaningful information. We could imagine numerous other examples, but the point is that some data is meaningful to such an organism and some is not, where meaning arises from enhanced survival (the reason for the behaviors in the first place).
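
If it helps to see the contrast concretely, here's a toy sketch of that organism. The receptor names, the responses, and the whole mapping are just my illustration, not a claim about real chemotaxis:

```python
# Toy model of the three-receptor organism described above.
# Names and responses are illustrative only.

RESPONSES = {
    "sugar":   "approach",  # food source: move toward it
    "sulfur":  "avoid",     # toxic in high concentration: move away
    "peptide": "approach",  # marker of its own kind: move toward it for DNA transfer
}

def respond(signal):
    """Map a detected signal to a survival-relevant behavior.

    Signals the organism has no receptor for (arsenic, a passing fish)
    change nothing about its behavior -- they carry no 'meaning' for it.
    """
    return RESPONSES.get(signal, "no response")

for s in ["sugar", "sulfur", "arsenic", "peptide", "fish"]:
    print(s, "->", respond(s))
```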

Of course we speak of many other types of meaning, but on what grounds does this not qualify as meaningful information for such an organism?

ETA:
Or, if it helps, replace meaning or meaningful with a synonym -- significant.

If you really want to understand this system, then simply drop any concept of "meaning" and focus on the physical interactions. That's the way to properly understand it.

What do you gain by adding the "meaning" tag?

And what on God's green earth does this have to do with understanding consciousness?

Surely it does not support the position that the behavior of consciousness has anything but a direct physical cause which, if we want to reproduce it artificially, must be generated by an equivalent physical cause. Just like every other action in the known universe.
 
No, it would not, because it is not designed to.

Running a simulation of a power plant is not the same as running a power plant.

I'll let westprog review the details of that fact for you.

This isn't a power plant. Your argument does not apply to everything, and you cannot blithely toss it around. For instance, a simulation of an SNES runs SNES games perfectly.

We can easily make interfaces that let electronic devices transfer signals along nerves. A simulation of the brain would indicate what level of signal to send along those nerves. We can easily make interfaces that pick up signals coming along a nerve. A simulation of the brain could interpret those signals properly. A simulation of the brain works perfectly with a little interface work.

At this point, as far as you guys and your custom terminology for this thread are concerned, this simulated brain + interface is a model brain, and there's no reason why it wouldn't work. If you have an argument against it working, you'll have to say where exactly the model would fail. I personally do not see it: it would gather information, the simulation would properly process that information (i.e. it would indicate how a real brain would take in information, function, and output information), and then it would send out information. There'd be no difference between the behavior produced by this and a normal brain.
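
Just to make the "little interface work" concrete, here's a rough sketch of the loop I have in mind. read_afferent, write_efferent, and brain_sim.step are hypothetical placeholders for the hard parts, not real APIs:

```python
# Sketch of a simulated brain wired to a body through an interface.
# read_afferent(), write_efferent(), and brain_sim.step() are hypothetical
# placeholders; only the shape of the loop matters here.

def run_model_brain(brain_sim, read_afferent, write_efferent, steps=1000):
    for _ in range(steps):
        sensory = read_afferent()        # signals picked up from incoming nerves
        motor = brain_sim.step(sensory)  # simulation computes what a real brain would output
        write_efferent(motor)            # drive outgoing nerves, speakers, etc.
```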
 
Drachosor, the physicalists have absolutely no objection to a model brain incorporating a computer.
 
Drachosor, the physicalists have absolutely no objection to a model brain incorporating a computer.

Can we also agree that consciousness will not reside in the mere interface to a human body?

Or, let me put this another way. If we could attach cameras to a human brain for eyes (we actually CAN do this already), supply it with blood and such for energy, and make a replacement voicebox, mouth, etc., for it to speak (or an interface that translates such things for speakers), then would the human brain remain conscious outside of a body in your view?
 
Ichneumonwasp said:
Guess your sentence "meaning must arise from interactions amongst the single substance that is, if monism can make sense" got me going. The idea of running an algorithm of thought on a computer seems the very illustration of "dualism" to me.

Or like I asked before, where does the math structuring the logic gates come from? Mathematical realism - that mathematical entities exist independently of the mind - seems indistinguishable from Platonism which is certainly "dualism".

On the other hand, if the maths are invented your single substance seems out the window.


Not really, no. Running an algorithm is just an action. The simulation would just be a way of helping folks see the relationships, but there is just an action going on in the computer no matter what else we are talking about.

It seems to me that some folks are getting caught up in the different 'levels of discussion' since talking about the program is just an easy way of discussing the movements of electrons in a computer. I don't see how anyone gets dualism out of any of this.

How is a single substance out the window if maths are invented? Math is just a way of describing relationships. And, yes, mathematical realism is a form of dualism. But none of this argument concerns mathematical realism. That is much closer to Beth's position.


Something I don't understand about this: how do we reduce to a computational description what it is (normatively) about that computational description -- its truth and relevancy -- that makes it useful and distinguishable from noise?

How does one formalize the making of such judgements?
 
Can we also agree that consciousness will not reside in the mere interface to a human body?

I don't know if you're referring to my posts, but I merely disputed the claim that you could interact with a conscious system "in the same way you can interact with a human" if the system didn't have a human body. That's very different from saying it wouldn't be conscious.

Or, let me put this another way. If we could attach cameras to a human brain for eyes (we actually CAN do this already), supply it with blood and such for energy, and make a replacement voicebox, mouth, etc., for it to speak (or an interface that translates such things for speakers), then would the human brain remain conscious outside of a body in your view?

Huh? That sounds like a body to me.
 
I don't know if you're referring to my posts, but I merely disputed the claim that you could interact with a conscious system "in the same way you can interact with a human" if the system didn't have a human body. That's very different from saying it wouldn't be conscious.



Huh? That sounds like a body to me.

I was responding to Piggy. Any consciousness is going to have a body...even a simulation is made of matter. You don't need a human body to interact, btw. We're interacting now, and for all you know I'm just some box of electronics (well, given our level of technology that is unlikely, but you get my point). Note I wasn't proposing any kind of mobility with those attachments, not that it really matters...I was making that analogy because that's what a computer box could easily have if it were running a simulation.
 
If you really want to understand this system, then simply drop any concept of "meaning" and focus on the physical interactions. That's the way to properly understand it.

What do you gain by adding the "meaning" tag?

And what on God's green earth does this have to do with understanding consciousness?

Surely it does not support the position that the behavior of consciousness has anything but a direct physical cause which, if we want to reproduce it artificially, must be generated by an equivalent physical cause. Just like every other action in the known universe.


There is a group of folks who insist that there is no way to get there from here -- that meaning can never be explained without a conscious observer. We must be able to build meaning from the ground up in some way or dualism is the only other option. I am trying to provide the components that others identify as crucial to meaning from the bottom up.
 
No, it does not.

If you have a machine that takes two of something and combines them with two of another thing to create a group of four, then this is apparent to anyone who looks at it. We can all utilize the machine in the same way, regardless of the language we speak or whether we can read or anything else like that.

On the other hand, if you don't understand the symbol system used in the display of a computer or digital calculator, it is impossible for you to determine that any symbolic "addition" has taken place.

No real addition has taken place. Rather, we have used a very different physical apparatus (which does not combine groups to get larger groups) to assist our imaginations.


I disagree vehemently. Addition still took place because it was defined so ahead of time by the person who programmed the algorithm. The person looking at it who does not understand the symbols simply does not understand that addition has taken place.

So, are we to say, if that person is taught what the symbols mean, suddenly addition has taken place? The computer suddenly did something new when the person understood what the symbols mean?

ETA:

Or as Blobru put it, using a watch and time as analogy, the computer did sums but did not tell sums.
 
Now what's your answer to my question?
There is as much meaning, etc., for the simple model as there is for the simple organism whose function it mimics. It seems to me that 'meaning' is a concept that requires a certain level of complexity to recognise, i.e. a level of complexity that is capable of conceptualising. We use it to describe how complex systems (such as living systems) respond to input. However, much of our language in this area consists of conceptual generalisations - abstractions for our own use, e.g. 'living'.

If you find the statement meaningless, your choice.
I don't understand the statement. I was asking you to explain what you meant. Would you care to do so?
 
I disagree vehemently. Addition still took place because it was defined so ahead of time by the person who programmed the algorithm. The person looking at it who does not understand the symbols simply does not understand that addition has taken place.

So, are we to say, if that person is taught what the symbols mean, suddenly addition has taken place? The computer suddenly did something new when the person understood what the symbols mean?

ETA:

Or as Blobru put it, using a watch and time as analogy, the computer did sums but did not tell sums.
So which is it?

Blobru said the same thing I did albeit more succinctly.
 
So which is it?

Blobru said the same thing I did albeit more succinctly.


Blobru said the same thing I am saying. The computer does sums but does not tell sums. That is what I have been saying all along. The act of doing sums continues whether anyone watches or not.

ETA:
Perhaps I need to contrast this with the falling abacus again. A falling abacus could 'do sums' only if someone looks at what 'comes out the other end', because it does not follow the algorithm for doing the sum. It does such a thing only incidentally, so its information is random. That is not analogous to a computer doing sums by following an algorithm. The information that emerges from each is equally meaningless to anyone who doesn't look at it, but the processes are not analogous.
 
If liquid carbon tetrachloride is a good model to use in the place of water for your purposes, then it's a good model to use in the place of water for your purposes. The question of whether or not it's "wet" is irrelevant, unless of course you need your model of water to be wet in order for it to do whatever you need it to do.

Ah, so you are going to stop parroting "is simulated water wet?" all over the thread?

Since by your own above admission, it is irrelevant in some cases?
 
I fail to understand any of that.

If a squirrel's brain does the same kind of thing my brain is doing when it's conscious, then a squirrel is conscious. If not, then it's not.

Toasters are not designed and built to be conscious, so they are not.

Removing parts from a squirrel to turn it into a toaster is... well, I have no clue what that is.

You sure do put a lot of thought into this issue, for someone who makes hundreds and hundreds of posts on it every year.
 
This thread is about consciousness, not self-reference.

"Self-recognition" is a very mushy term that could mean quite different things in different circumstances.

When my truck's computer monitors the engine's performance, you could say that this is "self-recognition" on the truck's part, but this has nothing to do with consciousness.

Ok. This is where I stop.

That statement is literally as stupid as saying "this thread is about engines, not internal combustion."

Whatever, Piggy -- since you know literally nothing about consciousness by your own admission, yet respond to every discussion as if you are an expert on what is and is not relevant, you should have fun making progress on this issue.
 
Something I don't understand about this: how do we reduce to a computational description what it is (normatively) about that computational description -- its truth and relevancy -- that makes it useful and distinguishable from noise?

How does one formalize the making of such judgements?

This isn't a big deal; you just can't reduce everything to a computational description at the same time because, obviously, there is no way to then include the reduced description in the reduced description.

In other words, you can't see the back of your own head if you are also looking at everything else.
 
Blobru said the same thing I am saying.

The computer does sums but does not tell sums. That is what I have been saying all along. The act of doing sums continues whether anyone watches or not.
Now if you'll take the next step and agree that without the 'telling' it's as meaningless as a landslide. Machines don't recognize or care about the meaning of design or not-design. If a fault occurs the computer won't be a bit bothered if the output says 2+2=5.

I'm still not sure if this is semantics or a real disagreement between us.
 
Now if you'll take the next step and agree that without the 'telling' it's as meaningless as a landslide. Machines don't recognize or care about the meaning of design or not-design. If a fault occurs the computer won't be a bit bothered if the output says 2+2=5.

I'm still not sure if this is semantics or a real disagreement between us.


I always said it was meaningless if no one observes the output. But the algorithm is still followed. Unless addition is defined as the meaning one can derive from following an algorithm, my point stands. My understanding is that addition is defined as the intentional process of following a particular algorithm to arrive at a sum; the adding still takes place whether anyone understands the output or even looks at it. That algorithm is put into the computer by a programmer. It is still observer-dependent in that sense, because an observer must input the algorithm in the first place; the programmer provides the intention that falling rocks or falling abacuses lack.
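
By way of a toy illustration of that point (the routine below is just the ordinary binary-carry algorithm, written by me for this post, nothing more):

```python
# The algorithm below was fixed ahead of time by whoever wrote it, and it is
# followed step by step whether or not anyone ever reads or understands the output.

def add(a, b):
    """Add two non-negative integers using the usual binary carry algorithm."""
    while b != 0:
        carry = a & b    # positions where both bits are 1 produce a carry
        a = a ^ b        # sum of the bits without the carries
        b = carry << 1   # carries shifted into the next column
    return a

result = add(2, 2)  # the adding has been done here...
# ...whether or not print(result) is ever called or the symbols understood.
```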
 