Has consciousness been fully explained?

Status
Not open for further replies.
A simulation which can replace even a single neuron cannot do so with just Turing machine functionality. I've explained this over and over. The objections to this position have been absurd. A Turing machine doesn't interact with its environment. It's a closed system, which performs computations. It's possible to simulate a neuron on a Turing machine, but such a simulation, purely computational in nature, could not possibly be used to replace a real neuron.
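The "closed system" point can be illustrated with a minimal sketch (the transition table and run loop below are purely illustrative, not anyone's proposed neuron model): a Turing machine receives its entire input up front on the tape and computes to completion with no interaction with an environment.

```python
def run_tm(tape, transitions, state="q0", halt="halt"):
    """Run a Turing machine to completion. Nothing enters or leaves
    during the run -- the tape is the machine's whole world."""
    tape = dict(enumerate(tape))  # position -> symbol
    head = 0
    while state != halt:
        symbol = tape.get(head, "_")  # "_" is the blank symbol
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A toy machine that inverts every bit, then halts at the first blank.
INVERT = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}

print(run_tm("1011", INVERT))  # -> 0100
```

Note that the only "I/O" is reading the initial tape and inspecting the final tape after halting; there is no channel through which such a computation could exchange real-time signals with anything outside itself.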

Real Computers aren't Turing Machines, so you're really just making a Straw Man here.

Well, the argument so far is about the Turing model. If you want to argue on a different basis you can choose sides and fight your corner on that basis.

That's not my argument and never has been. I'm not agreeing to your false dichotomy.
 
With a very broad definition of information, you could consider a lot of physical interactions to be information processing, though not all (given entropy).

Can you give an example of an interaction between two entities where information is not exchanged?
 
Real Computers aren't Turing Machines, so you're really just making a Straw Man here.

If that's the case then Church-Turing doesn't apply in the real world. Can we agree on that?

That's not my argument and never has been. I'm not agreeing to your false dichotomy.

It's not a false dichotomy. There's an argument that a Turing machine computation is necessary and sufficient to produce consciousness. If you don't agree with that then your argument is not with me.
 
It's an objective fact that computers work with electricity of a certain voltage and amperage running through them, and that they have logic gates opening and closing at submicrosecond speed. Rocks don't.

Do you think a silicon rock has not even the tiniest trace of the above features?
 
Something that functions like a neuron but uses different materials, i.e. there is something like an action potential and synaptic channels, etc., but there are no organelles or any other traditional "cell" stuff. And of course the materials are synthetic and not biological.

Do you accept that something that can replace a neuron, no matter what materials are used, cannot be a purely computational device?
 
westprog said:
What were you replacing neurons with in the original scenario if not already a 'computer controlled neuron'?

A simulation on a computer is what you'd need, with I/O ports to the real brain.

At the moment I suspect size will be a problem for any artificial neuron that isn't biochemical engineering.

But that's not the critical issue.

A simulation of neuron behaviour is a very useful thing if you want to study how neurons behave. I'm sure that such a thing has been written many times. However, it wouldn't recognise and transmit real-time electrochemical signals in the way a real neuron would. Such a thing is not necessary in a simulation. It's not what a simulation is for.

A component which could replace an actual neuron would be an entirely different thing to a simulated neuron. A simulated neuron can receive a data packet and produce another data packet, giving us insight into how nerves work. A replacement neuron would have to actually do the work.
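The "data packet in, data packet out" picture of a simulated neuron can be sketched with a textbook leaky integrate-and-fire model (all parameter values below are illustrative placeholders, not physiological measurements). Nothing in it couples to real electrochemical signals; it just maps numbers to numbers:

```python
def lif_step(v, input_current, dt=1.0, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-65.0):
    """One step of a leaky integrate-and-fire neuron model.
    Numbers in, numbers out -- nothing here couples to real
    electrochemical signals in a brain."""
    dv = (-(v - v_rest) + input_current) * (dt / tau)
    v = v + dv
    if v >= v_thresh:
        return v_reset, True   # spike, then reset the membrane
    return v, False

# Drive the model with a constant input current and count spikes.
v, spikes = -65.0, 0
for _ in range(200):
    v, fired = lif_step(v, input_current=20.0)
    spikes += fired
print(spikes)  # prints 14
```

A model like this is useful for studying firing behaviour, which is exactly the distinction being drawn above: it tells us something about how neurons behave, but it does not itself do the electrochemical work a replacement component would have to do.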

This is the point that Pixy totally misses in the absurd claim that a brain can be replaced not just by some kind of computer, but by any computer. It involves a total misunderstanding of what the brain actually does by abstracting its behaviour into computational terms.
I may have missed the curve, but thought RD was suggesting replacing a neuron in a living, working, brain with an artificial neuron.
 
Something that functions like a neuron but uses different materials, i.e. there is something like an action potential and synaptic channels, etc., but there are no organelles or any other traditional "cell" stuff. And of course the materials are synthetic and not biological.

The point is, if you agree that a brain will still be conscious when you replace some neurons with artificial ones, and then with computer-controlled proxies, why would a full-on replacement of the whole thing not be conscious? Where is the line drawn?
Neuron by neuron replacement (now you are accomplishing photon exchanges in previously defined architecture) should work until the living brain can no longer handle the poking, cutting, prodding, etc, and dies.

Do you actually suggest your course of action will ever be ethically acceptable?
 
This is exactly what westprog said 2 years ago, and this is exactly why I shifted focus to cells vs. rocks rather than computers vs. rocks.

Why?

Because cellular organisms exist with or without humans. Bacteria don't give a hoot whether humans even recognize them or not. And bacteria can tell a rock from other bacteria. They do it all the time. That's how they reproduce, that's how they communicate.

Thus, there is an objective difference between bacteria and rocks. Otherwise, bacteria wouldn't know how to interact with another bacterium differently than they interact with a rock.

Of course, you could claim that this is still a "subjective" difference, but then you have admitted that bacteria experience subjectivity!!!
Ha Ha. Note that my request concerns rocks and computers.

Yes, lifeforms are different, and they don't require human subjectivity to "know" it; the cells follow the 7 attributes of life just fine. There's some SRIP that works (in a specified substrate).

Making any headway on the computations/sec needed to simulate a cell at Planck-dimension granularity?

ps. My worldview has no problem with the concept that cells exhibit subjectivity, and would be amazed if that wasn't the case.
 
It's possible to simulate a neuron on a Turing machine, but such a simulation, purely computational in nature, could not possibly be used to replace a real neuron.

That's fantastic.

Who cares?

Pixy is talking about simulating the whole brain, including input, on a Turing machine.

I am talking about simulating a single neuron with a Turing equivalent machine that, because it is real, doesn't suffer from your stupid "time domain" objection.

Two different issues, two different arguments, try not to get lost so easily.
 
Do you accept that something that can replace a neuron, no matter what materials are used, cannot be a purely computational device?

Of course not, because purely computational devices do not exist except in fairyland.

If time is viewed as an ordered sequence of causal events -- and it should be, because that is what it is -- then obviously for a sequence of events to be replaced with an equivalent one, the order and sequence need to be maintained.

If the causal sequence is what we call our reality, then obviously a replacement neuron needs to be able to interact with that causal sequence -- you could call it a "coupling" requirement. The causal sequences need to be able to "couple."

Pixy is talking about replacing the whole sequence with a different one -- a simulation. This is just fine, because there is no coupling problem in that case either. But I am not interested in that issue right now.
 
Drachasor said:
Without some living entity to provide subjectivity, I'd consider that meaningless.

Are you saying nothing objective exists?
Not exactly, but that's not within this topic, nor relevant to it.

Living things exist, and at our level have arguments about subjectivity and objectivity.
 
Neuron by neuron replacement (now you are accomplishing photon exchanges in previously defined architecture) should work until the living brain can no longer handle the poking, cutting, prodding, etc, and dies.

Do you actually suggest your course of action will ever be ethically acceptable?

Yes, I would certainly do it to myself if that is the only way to extend consciousness.

But that is irrelevant -- the issue is whether or not a point will be reached where the brain ceases to be conscious and instead is just a "machine" or "simulation" or whatever you want to call it.

Assuming it never dies, and keeps talking to you, is that point reached? Or do you agree that if you replaced the whole thing, little by little, part by part, until the whole thing was in a simulation, and it was still talking to you as if nothing happened, that we would still have genuine consciousness?
 
Objective? In your subjective opinion (and in mine too) yup.

And when no subjective opinions are available both make fine doorstops; see the problem?



Then nothing is different from anything else unless it is being experienced, which is a pretty useless concept, though perfect for solipsism.
 
Ha Ha. Note that my request concerns rocks and computers.

Yes, lifeforms are different, and they don't require human subjectivity to "know" it; the cells follow the 7 attributes of life just fine. There's some SRIP that works (in a specified substrate).

But if we can find ways that lifeforms are different from rocks, can't we find ways that other things are different from rocks as well?
 
I may have missed the curve, but thought RD was suggesting replacing a neuron in a living, working, brain with an artificial neuron.

Yes, that is what I am suggesting.

Westprog ignores the issue though because he/she has no valid argument that can deal with it.
 