
Explain consciousness to the layman.

Wait a moment (I thought I might be making a mistake putting all that stuff in a single post).

I started with a black box that replaces the visual cortex, and you appeared to agree that if it could interface with the incoming and outgoing neurons appropriately and reproduce the same outputs given the same inputs, this could work - the patient could see and remain conscious.

I then suggested a number of scenarios based on that, replacing more subsystems in the same way, and/or extending the scope of the original black box to encompass more of the brain function. Finally I suggested replacing the whole brain with a black box (half seriously, half in jest).

My purpose was to see, given you accepted visual cortex replacement (didn't you?), whether you feel there is a point beyond which replacing those subsystems with black boxes would 'break' consciousness, or whether you feel it is possible to have a human-like black box consciousness that doesn't necessarily function in terms of artificial neurons (this because of previous suggestions that the physical structure would need to be emulated).

These questions obviously require some knowledge and understanding of the functional architecture of the brain, and some idea or ideas of which parts of that architecture might be involved in consciousness (e.g. the frontal cortex, but not the cerebellum - which is an obvious black-box candidate). I assume you have enough knowledge of and opinions about these things, given your steadfast and authoritative statements in the thread.

In other words, I'm curious to know what extent of the brain you think it might be theoretically possible to replace in the way described, and still maintain conscious function. What do you think the constraints are, where do you feel problems might lie, etc. (given that we could produce such black boxes and connect them)? I think it's germane to the thread, but I'll understand if you don't want to tackle it.

Part of my motivation is that I think it may soon be possible to do this kind of replacement for real with very simple brains - to monitor the substantive inputs and outputs of a neural subsystem, train a learning system to reproduce the functionality, ablate the monitored tissue, and use the monitoring probes to enable the learning system to replace it. Clearly it's a long way from the pure speculation above, but it was the stimulus for it (ha-ha).
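To make the near-term idea concrete, here's a minimal sketch of the monitor/train/replace loop, with a toy function standing in for the biological subsystem (the subsystem, its dimensions, and the choice of learning model are all invented for illustration - a real version would record from probes, not call a function):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in for the biological subsystem to be replaced:
# maps a 3-channel input firing rate to a 2-channel output.
W = np.array([[0.8, -0.3], [0.2, 0.5], [-0.6, 0.9]])
def biological_subsystem(x):
    return np.tanh(x @ W)

rng = np.random.default_rng(0)

# Step 1: monitor - record input/output pairs at the probe sites.
X = rng.normal(size=(5000, 3))
y = biological_subsystem(X)

# Step 2: train a learning system to reproduce the functionality.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(X, y)

# Step 3: "ablate" the original and route the probes through the
# replacement; check the mimicry at the interface.
X_test = rng.normal(size=(10, 3))
print(np.abs(model.predict(X_test) - biological_subsystem(X_test)).max())
```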

If you accept that the brain is a purely physical process - and that certainly seems the only scientific basis on which to proceed - then it should be taken as read that it would be possible to replace any portion of it with a functionally equivalent artificial component. However - we don't know exactly what functionality is required for the brain to do what it does. Indeed, the only thing that would do exactly what the brain does is an exact replica of the brain. It's necessary to decide what can be discarded without losing anything important.

It should also be noted that replacing the artificial black box network with a computer implementation of same will lose functionality.
 
Your line of reasoning is analogous to reasoning that we cannot build a flying contraption because all of the flying devices we know of are biological.

To be fair, he did explicitly qualify it as a 'hunch':
...My HUNCH is based on the fact that AS FAR AS WE KNOW HERE AND NOW (and not in imagined possible other realms and fiction) there is SO FAR no other physical process that has produced consciousness other than CEREBRAL CORTEXES and not even all of them at that.

Having said that, history has shown that such hunches are a poor guide to what is possible.
 
Me too :D
The difference does seem to cause a remarkable amount of confusion.

I think that confusion stems from a refusal to accept that for many functions emulation is the same as simulation.

Case in point: if you have 4 neurons in series, connected to other neurons at both ends, and you replace all 4 with artificial emulators, the overall network should function the same.

Now replace the 4 with a single emulator that internally functions by simulating each of the 4 original neurons. If an emulator works by simulating anything, those simulations are also emulations.

I think even piggy would agree that this will also preserve the properties of the network.

Yet when we extrapolate, there is some arbitrary point where people start to think the internal simulations cease to also be emulations.
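A toy version of that argument, under the (generous) assumption that a neuron can be treated as a simple input-output function:

```python
import numpy as np

# Four toy "neurons" in series, each a simple input-output function.
def n1(x): return np.tanh(1.5 * x)
def n2(x): return np.tanh(x - 0.2)
def n3(x): return np.tanh(0.7 * x)
def n4(x): return np.tanh(x + 0.1)

# Option A: four separate emulators, wired in series.
def four_emulators(x):
    return n4(n3(n2(n1(x))))

# Option B: one black box that internally *simulates* each of the four.
def single_emulator(x):
    state = x
    for neuron in (n1, n2, n3, n4):  # internal simulation of each stage
        state = neuron(state)
    return state

xs = np.linspace(-2, 2, 9)
assert np.allclose(four_emulators(xs), single_emulator(xs))
```

At the interface, nothing can tell option A from option B - which is the point.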
 
Indeed, the only thing that would do exactly what the brain does is an exact replica of the brain.
Yes, but the point of contention is what we mean by 'exact replica' given the context. I have a music CD copied from another music CD. The data is an exact replica of the original data. The medium has exactly the same format but slightly different materials. Is that an 'exact replica' for the purposes of listening to the music? I would suggest it is. OTOH, if I had the same recording on vinyl or tape, it would not be an 'exact replica' for the purposes of listening, because the data is different enough to hear the difference (although not everyone would necessarily agree).

It's necessary to decide what can be discarded without losing anything important.
Yes; I'm asking for speculation, informed if possible.

It should also be noted that replacing the artificial black box network with a computer implementation of same will lose functionality.
In what respect? Surely not if the black box network is itself a computer implementation, but I assume you mean something other than that?
 
Yes, but the point of contention is what we mean by 'exact replica' given the context. I have a music CD copied from another music CD. The data is an exact replica of the original data. The medium has exactly the same format but slightly different materials. Is that an 'exact replica' for the purposes of listening to the music? I would suggest it is. OTOH, if I had the same recording on vinyl or tape, it would not be an 'exact replica' for the purposes of listening, because the data is different enough to hear the difference (although not everyone would necessarily agree).

In the case of the CD, it's fairly well agreed what the nature of the data is - and what can be disposed of, and what is essential. We need the actual numbers stored as pits on the CD, and we need some scheme for decoding those numbers. We could, without loss of information, type out the numbers on sheets of paper, for example.
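As a throwaway illustration of that separation between the numbers and the medium (the sample values are invented):

```python
import struct

# A few 16-bit PCM samples - the "actual numbers" on the disc.
samples = [0, 12000, -12000, 32767, -32768]

# Medium 1: a CD-like binary byte stream.
binary_copy = struct.pack("<5h", *samples)

# Medium 2: the same numbers "typed out on sheets of paper"
# (here, a text string).
paper_copy = "\n".join(str(s) for s in samples)

# Both media decode to exactly the same data.
from_binary = list(struct.unpack("<5h", binary_copy))
from_paper = [int(line) for line in paper_copy.splitlines()]
assert from_binary == from_paper == samples
```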

The problem with the brain is that we don't know, for a fact, what is and isn't essential. I think most people would be happy enough to have a vein replaced after a stroke. The mechanisms that do the thinking would be another matter.

Yes; I'm asking for speculation, informed if possible.


In what respect? Surely not if the black box network is itself a computer implementation, but I assume you mean something other than that?

I mean a computer implementation that runs a computation supposedly equivalent to what is running in the brain - Turing-equivalent, technically. Such a computation wouldn't be running on artificial neurons, and it wouldn't be "plug-compatible" with the brain. The disagreement is not about the artificial brain - I think that most people would agree with that as likely workable, in concept anyway. It's with the idea that the functionality of the said artificial brain could be expressed, without loss of functionality, as a computation, and that said computation could be implemented on any computing hardware.
 
The problem with the brain is that we don't know, for a fact, what is and isn't essential. I think most people would be happy enough to have a vein replaced after a stroke. The mechanisms that do the thinking would be another matter.

So you're just arguing from ignorance, then. We don't know yet, but neither have we run across anything unknowable yet. At least not that we know of. What we do know is that what we know so far about what is and isn't essential is all happily simulatable, even the nonessential bits. It stands to reason the trend will continue. Of course, if you know of something we don't know that is unknowable, I at least would like to know of it.
 
It should also be noted that replacing the artificial black box network with a computer implementation of same will lose functionality.

I'm not sure what you mean by "functionality" here. If you mean "will produce the same outputs given the same inputs", then you are wrong. A computer implementation can have exactly the same functionality.
 
No, that conclusion actually isn't implicit in what I was saying.

Of course that's the case with information processing.

But I hope you're not saying that we can do something equivalent with physical systems and still have them operate the same way they do now.

That was my point. What you were saying was in no way a response to what I said, which was a very simple and clear statement about the nature of computation. It was phrased as if it was a response though. Such replies make communication difficult.
 
I'm not sure what Piggy was claiming. However, back when I was only myself, I don't believe that I ever claimed that a simulation wasn't conscious. I simply claimed that we have no evidence that a simulation is conscious. If we start with something that's conscious in the first place, then indeed it will be conscious when pretending to be something else. If we start with something that isn't conscious, then I see no reason that it becomes conscious by pretending to be someone else.

Congratulations, westprog. Now your avatar confuses the hell out of me. I'm sure yy2bggggs will be happy.
 
So you're just arguing from ignorance, then. We don't know yet, but neither have we run across anything unknowable yet. At least not that we know of. What we do know is that what we know so far about what is and isn't essential is all happily simulatable, even the nonessential bits. It stands to reason the trend will continue. Of course, if you know of something we don't know that is unknowable, I at least would like to know of it.

I'm not arguing from ignorance. That has a specific meaning - claiming that a position is true due to lack of evidence against it. I think it's fairly clear which side is doing that.
 
I'm not sure what you mean by "functionality" here. If you mean "will produce the same outputs given the same inputs", then you are wrong. A computer implementation can have exactly the same functionality.

Can have. The claim is that it always will have.

There's a small claim - that hardware can be inserted in the brain to replace any given part of it. (In principle, of course - not in practice).

There's a large claim - that if the entire brain were mapped in such a way, that a computer program could result, which could, in principle, be run on any sufficiently powerful computer. It's quite obvious that any arbitrary computer can't be just stuffed into the brain cavity. Therefore it cannot have "exactly the same functionality".
 
Congratulations, westprog. Now your avatar confuses the hell out of me. I'm sure yy2bggggs will be happy.

He started it. Or I did... I'm not sure. I know that me/Westprog didn't, but the other me might have.
 
Can have. The claim is that it always will have.

There's a small claim - that hardware can be inserted in the brain to replace any given part of it. (In principle, of course - not in practice).

There's a large claim - that if the entire brain were mapped in such a way, that a computer program could result, which could, in principle, be run on any sufficiently powerful computer. It's quite obvious that any arbitrary computer can't be just stuffed into the brain cavity. Therefore it cannot have "exactly the same functionality".

I'm not clear on what you're saying here. Of course it wouldn't be "any arbitrary computer"; it would have to have software to emulate the multi-processor brain, sufficient storage, and, to run in real time, sufficient speed. But if we eliminate the time constraint by slowing down the I/O (slowing down the universe, I suppose, which we can do if the universe is simulated), then any processor chip would be sufficient, given enough memory and a memory addressing scheme that makes it all available. Is an otherwise equivalent brain less conscious because it thinks more slowly?
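The point that speed is external to the computation can be made with a trivial sketch (the update rule here is arbitrary; only its determinism matters):

```python
import time

# A toy deterministic "simulation step", standing in for one tick
# of a brain emulation.
def simulate(steps, delay):
    state = 42
    for _ in range(steps):
        state = (state * 31 + 7) % 9973  # same transition every run
        time.sleep(delay)                # only changes wall-clock time
    return state

fast = simulate(1000, delay=0)
slow = simulate(1000, delay=0.0001)
assert fast == slow  # identical computation; only its speed differs
```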
 
Let me put it this way:

A map is not the territory, but a map of a map is a map.

And consciousness is a map.

Consciousness is not a thing, it's an event. It's a behavior of a thing (a brain).
 
If we start with something that's conscious in the first place, then indeed it will be conscious when pretending to be something else. If we start with something that isn't conscious, then I see no reason that it becomes conscious by pretending to be someone else.

Well there you go.

But a lot of folks haven't thought out the full implications of that fact, which eventually leads you to the conclusion that consciousness can't be programmed, it can only be built.
 
Well there you go.

But a lot of folks haven't thought out the full implications of that fact, which eventually leads you to the conclusion that consciousness can't be programmed, it can only be built.

If I wasn't learning things from all the posts people are making, showing you how utterly wrong you are on all this stuff, I would have stopped reading this thread long ago.

That is how frustrating your empty arguments have become, piggy.

Take this latest one for instance. Any computer we program must first be "built" by assembling the hardware, and the "programming" is then done by changing physical properties of that hardware. We also know that the brain typically develops over time more by changing synapse strengths than by actually growing new neuron connections - which is much closer to how we "program" computers than to how we "build" them - and in any case the brain must be "built" before it can be "programmed". So I don't know wtf you are talking about when you try to distinguish between "building" something and "programming" something. According to *any* formal definition you could come up with, "programming" is merely fine-grained "building" - there is zero qualitative difference between the two.
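For what it's worth, here's a toy sketch of that grain difference (not anyone's formal definition, just an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Building": fix the architecture - which units connect to which.
# Here, a hardwired 3-4-2 feedforward net.
w1 = rng.normal(size=(3, 4))
w2 = rng.normal(size=(4, 2))

def network(x):
    return np.tanh(np.tanh(x @ w1) @ w2)

before = network(np.array([1.0, -0.5, 0.2]))

# "Programming": leave the wiring alone and adjust the weights - the
# rough analogue of changing synapse strengths.
w1 *= 0.9
w2 += 0.05

after = network(np.array([1.0, -0.5, 0.2]))
print(before, after)  # both steps were just changes to physical state
```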

For someone who is trying to bark up the tree of equivocating physical processes when no "observer" is present, you sure do pull a lot of arbitrary and unexplained distinctions out of you-know-where.
 
But a lot of folks haven't thought out the full implications of that fact, which eventually leads you to the conclusion that consciousness can't be programmed, it can only be built.
I'm confused. The way I read this, you're claiming that you can derive the fact that consciousness cannot be programmed, but must be built, from the fact that you cannot see how a simulation would become conscious.

Is this your claim?
 