My take on why the study of consciousness may indeed not be so simple

No.

I'm talking about simple coherence to start with. Syntactically valid statements with well-formed referents.

Once you've mastered that, then you can try tackling semantics.

Amazing.

Every time I reply, you supply evidence in advance of my intuition about you.

So you really do believe mathematics is required to understand Shakespeare.
 
Exactly.

When we define consciousness as the ideal result of a material process, we end up with a material world defined ideally.

When we define consciousness as the material result of an ideal process, we end up with an ideal world defined materially.

We need to outgrow the limitations of our language.

Or just label the behaviors and not worry about what is impossible to decide. (I.e., materialism and idealism cannot be distinguished from each other.)
 
Since when is a simple understanding of language synonymous with understanding someone's description of their experience of self-awareness? :boggled:



Not if you don't see the difference between mathematics and Shakespeare.

!Kaggen, the first statement is nicely done; the second is a fallacy of construction and an appeal to emotion.

Now, Pixy Misa is a bit curt, but has been through years of these debates. There are reasons Pixy states what they do; you might want to find them out and continue to elaborate your statements, as you did nicely in your first statement.
 
You can explain it all you want. Neither color nor experience is relevant. If brain states are identical to mental states, I don't need to ever see anything to know what "seeing color" is like.

Yes you do. You're constantly moving the goalposts between "complete" knowledge and less-than-complete knowledge.
 
There's a seamless connection between a USB webcam and the computer it's plugged into.


The first stage is the electronics for the camera which convert the light signals from the brain into the USB format. This is then taken from the USB connector and passed into the USB bus. The USB bus driver takes this data and passes it to the consuming program - which reconstructs the data back into some form which a different program again can display, or process. There's no direct connection, and there can't be for normal computer operation.
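Purely to illustrate that staged hand-off, here is a toy Python sketch (none of these functions are real USB APIs; every name here is invented) in which each stage only ever sees the output of the previous one:

```python
# Toy sketch only -- the point is the staged, indirect path:
# sensor -> packets -> bus driver -> consuming program.

def sensor_capture(light_samples):
    """Camera electronics: convert incoming light into raw pixel values."""
    return [min(255, int(s * 255)) for s in light_samples]

def usb_encode(pixels, packet_size=8):
    """Camera firmware: wrap raw pixels into fixed-size packets."""
    return [pixels[i:i + packet_size] for i in range(0, len(pixels), packet_size)]

def bus_driver(packets):
    """Bus driver: deliver packets to the consuming program one at a time."""
    for packet in packets:
        yield packet

def consuming_program(packet_stream):
    """Reassemble packets into an image some other program can display."""
    image = []
    for packet in packet_stream:
        image.extend(packet)
    return image

light = [0.1, 0.5, 0.9, 0.3] * 4  # pretend light intensities hitting the sensor
image = consuming_program(bus_driver(usb_encode(sensor_capture(light))))
# The display side never touches the sensor directly -- every stage
# sees only the output of the stage before it.
```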
 
Yep.

That's why I say: ask whether building a brain out of neurons will result in consciousness.

If we can build a brain (and nervous system) out of neurons in such a way that it is physically identical to a human brain, then it will probably be conscious. However, if we leave out elements of the human brain then we can no longer be certain that consciousness is produced.

Until we can definitively identify how consciousness is produced, and indeed define consciousness precisely, then we can't claim that certain parts of the operation of the human brain are in some sense non-essential.

I have a suspicion it isn't actually the neuron-transistor swap that holds people up, but rather a fundamental doubt that humans should be able to understand their own consciousness. Or in other words, whether or not such knowledge should be restricted to God.

I have a suspicion that the boundless credulity of the Strong AI movement is caused by the inability to explain consciousness as it is. But assigning motivation to people with whom one disagrees is content-empty.
 
Let's back up.

Paul's point about replacing neurons with computers isn't contrary to physicalism. His question is why this type of physical explanation won't work.

If there is something special about biology or something intrinsic to humans that precludes computer based consciousness then you haven't as yet identified it. We only have, to date, a model that doesn't make biology requisite.

Because computers are not neurons. Replacing the heart with a computer wouldn't result in a working circulatory system.

Either you replace the brain with a physically identical alternative - which proves nothing - or you replace it with something different - with unknown results. We don't have a functional specification for the brain, and we don't know what aspects of its function are disposable.
 
But I think you need to be clear just what it is that we believe needs an explanation.

We already have an explanation for the complex processing that we observe in connection with consciousness.

So what we would need to be clear about is, just what is the other thing that needs explaining?

(And note I am not saying that there is no other thing that needs explaining, just that if there is, then we need to be clear about what it is we are trying to explain before embarking on a quest for the answers).

Yes, we do need to be clear about what it is that needs explaining. No, we are not clear about what it is that needs explaining. No, we cannot therefore assume that there is nothing to explain.

Human experience is something to which we can all testify. At the same time, we can't come up with a precise definition. I've conjectured that this may be impossible in principle. However, it will not be possible to explain the origin of something undefined.
 
I'm getting a serious deja vu here. I swear we've had this kind of conversation with other people who were expressing doubt that computers can replace neurons, implying that there must be more to it, but wouldn't come out with an explanation for their doubt.

Perhaps Westprog just has a nagging, nonspecific doubt. That's certainly allowed. But we've been snookered before, so we're suspicious.

~~ Paul

I have my doubts simply because sometimes changing the material changes the properties of something - a bridge made out of balsa wood will not perform the same as a bridge made out of cast iron, even though they are both subject to the exact same "laws of physics".
 
The first stage is the electronics for the camera which convert the light signals from the [camera?] into the USB format.
Close enough... technically, the USB camera doesn't send data over the USB bus until it is signaled to transmit a packet by the USB host. I'm not sure this has an analogy in vision.

But our eyes have about 200 million light-sensitive cells. The rods easily saturate and become useless, leaving over 6 million cones to transfer information in daylight vision conditions (but let's not forget night vision either). There are various sorts of ganglion cells that immediately start processing the image in the eye.

For example, for color vision, we have three types of cones--L, M, and S. L is sensitive throughout our visual spectrum, but most sensitive near a yellowish green. M is very slightly blue-shifted, but tapers off faster than L's sensitivity on the blue end. S is primarily sensitive to light on the blue end. Color information is derived differentially from these signals... the red-green opponent process as L-M, the white-black opponent process as L+M+S, and the yellow-blue opponent process as L+M-S. Interestingly enough, the USB bus also uses differential signaling (though of a different sort).
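To make the arithmetic concrete, here is a toy Python snippet (the cone response values are invented, and the plain sums and differences are a simplification of the actual neural computation) showing how the three opponent channels fall out of the L, M, S signals:

```python
# Illustrative only: derive the three opponent channels from made-up
# L, M, S cone responses, using the combinations described above.

def opponent_channels(L, M, S):
    """Combine cone responses into opponent signals (toy values, no units)."""
    red_green = L - M          # red-green opponent process
    white_black = L + M + S    # white-black (luminance) opponent process
    yellow_blue = L + M - S    # yellow-blue opponent process
    return red_green, white_black, yellow_blue

# A stimulus that excites L strongly, M moderately, and S weakly:
print(opponent_channels(L=0.75, M=0.5, S=0.25))
# -> (0.25, 1.5, 1.0): toward red, fairly bright, toward yellow
```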

As far as I'm aware, the brain never gets a direct signal from the cones.
This is then taken from the USB connector and passed into the USB bus.
The optic nerve carries signals from the eye to the brain. This is a bundle of nerve fibers capable of transmitting about 1 million channels of information. Yes, just one million, not 6, not 200.
The USB bus driver takes this data and passes it to the consuming program - which reconstructs the data back into some form which a different program again can display, or process.
At the receiving end of the optic nerve is the visual cortex--in particular, V1. V1 reconstructs the data coming from the optic nerve, forming an image. And that's just the first layer of processing. Various other layers of the visual cortex also reproduce the image.
There's no direct connection, and there can't be for normal computer operation.
Okay, so there's no direct connection from eye to brain either.
 
I have my doubts simply because sometimes changing the material changes the properties of something - a bridge made out of balsa wood will not perform the same as a bridge made out of cast iron, even though they are both subject to the exact same "laws of physics".

And replacing the wiring in your house with lengths of string just won't work at all. It's necessary to have a functional description of the system before you can swap parts.
 
Because computers are not neurons.
Why is that important? What is intrinsically special about neurons?

Replacing the heart with a computer wouldn't result in a working circulatory system.
But we have cardio bypass.

Either you replace the brain with a physically identical alternative - which proves nothing - or you replace it with something different - with unknown results. We don't have a functional specification for the brain, and we don't know what aspects of its function are disposable.
You need to identify what it is about the human brain that can't be replicated. You are making unwarranted assumptions. I'm not saying anything conclusively. However, you are implying that there is something intrinsically significant about the human brain that can't be, or isn't likely to be, replicated.
 
And replacing the wiring in your house with lengths of string just won't work at all. It's necessary to have a functional description of the system before you can swap parts.
Very poor example. You act as if neuroscientists know nothing of the human brain.

In any event, in WWII German engineers replicated British radar by reverse-engineering the radar equipment from a captured plane. They did so without having a functional description of the system.

We are not as in the dark as you allege.
 
If we can build a brain (and nervous system) out of neurons in such a way that it is physically identical to a human brain, then it will probably be conscious.

This is what I don't get -- why on Earth do you need to include the "probably" qualifier in this statement?

That makes me think you are indeed mired in superstition (or solipsism). Because if it is physically identical to a human brain, then it will be conscious (or have the capacity for consciousness), just like you or me.

Why can't you accept this?
 
The camera is a computer, Westprog.

Then its internals carry out the same process of converting light into electrical signals, and converting those electrical signals into a standard format. Isolation and insulation are the principles of computer architecture. They are not the principles of the central nervous system.
 
Then its internals carry out the same process of converting light into electrical signals, and converting those electrical signals into a standard format. Isolation and insulation are the principles of computer architecture. They are not the principles of the central nervous system.
What are isolation and insulation? Are you saying that the eye doesn't convert light waves into a signal?
 
Because computers are not neurons.
In this case, yes, they are.

Replacing the heart with a computer wouldn't result in a working circulatory system.
So what? How can you possibly believe that this is a meaningful response?

Teeny-tiny computer is to neuron as mechanical pump is to heart.

Either you replace the brain with a physically identical alternative - which proves nothing - or you replace it with something different - with unknown results.
The question was, if we replace the neurons in Paul's brain, one by one, with teeny-tiny computers that are functionally identical, what happens to his consciousness?

Stop wriggling, and stop raising category errors as though they were meaningful contributions.

We don't have a functional specification for the brain, and we don't know what aspects of its function are disposable.
Yes we do, and yes we do.
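
Going back to the teeny-tiny computer question: here is a minimal sketch of what "functionally identical" could mean at the level of a single unit, assuming (purely for illustration) a leaky integrate-and-fire model. Real neurons are far richer; the point is only that the replacement exposes the same spike-in/spike-out behavior as what it replaces:

```python
# Hedged sketch: a "teeny-tiny computer" standing in for one neuron.
# The model and all parameters are illustrative assumptions, not a
# claim about how real neurons work.

class TinyComputerNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold  # firing threshold
        self.leak = leak            # per-step decay of the potential

    def step(self, weighted_inputs):
        """Accept synaptic input; return True if the unit 'fires'."""
        self.potential = self.potential * self.leak + sum(weighted_inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

# Drop-in use: upstream spikes arrive, downstream sees only spikes out.
unit = TinyComputerNeuron()
outputs = [unit.step([0.4, 0.3]) for _ in range(5)]
print(outputs)  # -> [False, True, False, True, False]
```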
 
