
Has consciousness been fully explained?

Status
Not open for further replies.
That is three saying it would, nobody yet saying it wouldn't.

How about the others here?

Well, since the supposition is that it is "sufficiently detailed", that presupposes the answer, doesn't it? If the assumption is that such a thing is possible, then naturally the answers will reflect that.

Replace "human brain" with "hydroelectric scheme" and "sense data" with "water pressure". I think we'd agree that such a program would reflect the behaviour of a hydroelectric scheme. I think we'd all also agree that such a program would not be able to generate electricity.

In fact, we could insert almost any physical system into the simulator, and we'd end up with something that behaved like that system, and wasn't that system.
 
Yes, and it doesn't matter what the instant velocity of every molecule in a hurricane is.
For determining the next state of the system?

I am pretty sure it does. Even tiny differences in one state will lead to a huge divergence in just a short time.

But for a logic chip the state, as measured in low/high will absolutely determine the next state of the system, measured the same way.

Small differences in voltages will not lead to huge divergences in the process.

If a logic chip worked like a hurricane then we would not be communicating.
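The contrast can be made concrete with a minimal sketch, using the logistic map as a stand-in for a hurricane-like chaotic system and a NAND function for a logic gate; both are illustrative assumptions, not anything claimed above:

```python
# Sketch contrasting sensitive dependence with digital logic.
# The logistic map stands in for a hurricane-like system, and a
# NAND function for a logic chip; both are hypothetical illustrations.

def logistic(x, r=4.0):
    # Chaotic for r = 4: nearby states diverge exponentially fast.
    return r * x * (1.0 - x)

# Two initial states differing by one part in a billion.
a, b = 0.4, 0.400000001
for _ in range(40):
    a, b = logistic(a), logistic(b)
# After a few dozen steps the two trajectories bear little
# resemblance: the tiny initial difference has been hugely amplified.

def nand(x, y):
    # Digital abstraction: only the low/high classification of a
    # voltage matters, so small analogue differences cannot propagate.
    return 0 if (x and y) else 1
```

Run forward, the two logistic trajectories end up wildly different, while the NAND output is identical for any two voltages that classify the same way.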

Incidentally, how about answering my question above about the brain simulation? (OK, strike that, you did.)
 
How does the operation of the program result in an actual phenomenon -- a real-world event -- without an additional mechanism to make the phenomenon occur?

That's the Achilles heel of IP-only theories.

The Turing model is very precise. That's what makes it so hugely useful in the world of computing. However, it totally excludes any possibility of interaction with the real world. Any such interaction has to happen outside the realm of the Turing model.

That's all we need to exclude the contention that a Turing machine can do anything a brain can do. A brain can interact with the world. A Turing machine cannot.

Of course, any actual implementation of a Turing machine will interact with the real world. But such interaction is not covered by the Turing model, and the Turing model will not predict how such interaction will take place.
 
Well, since the supposition is that it is "sufficiently detailed", that presupposes the answer, doesn't it? If the assumption is that such a thing is possible, then naturally the answers will reflect that.

Replace "human brain" with "hydroelectric scheme" and "sense data" with "water pressure". I think we'd agree that such a program would reflect the behaviour of a hydroelectric scheme. I think we'd all also agree that such a program would not be able to generate electricity.

In fact, we could insert almost any physical system into the simulator, and we'd end up with something that behaved like that system, and wasn't that system.
But it would behave like the system?

For example it might get onto the internet and start debating about consciousness?

It might, like Interesting Ian, argue that "I have incorrigible certain knowledge that I am conscious" - that sort of behaviour?
 
Well, since the supposition is that it is "sufficiently detailed", that presupposes the answer, doesn't it?
Actually I missed this part. No, that is not the assumption - that is the question.

The assumption is that there is sufficient computing power to model how the components of the brain interact with each other.
 
The Turing model is very precise. That's what makes it so hugely useful in the world of computing. However, it totally excludes any possibility of interaction with the real world. Any such interaction has to happen outside the realm of the Turing model.

That's all we need to exclude the contention that a Turing machine can do anything a brain can do. A brain can interact with the world. A Turing machine cannot.

Of course, any actual implementation of a Turing machine will interact with the real world. But such interaction is not covered by the Turing model, and the Turing model will not predict how such interaction will take place.
The brain operates on a stream of input data. It does operate more or less in real time, but the time we experience is an artifact of brain processing.

The only real difference is that the data input stream for the brain comes in from a number of different sources in parallel.

But there is no reason why a single tape cannot represent a set of parallel data sources.

So I don't see any reason to suppose the Turing model could not predict how such an interaction could occur.

To me time is not the problem.
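The interleaving point can be sketched as follows: a hypothetical multiplexing of three "sense" streams onto one sequential tape, with each datum tagged so that a strictly sequential reader can recover every stream. The stream names and values are made up for illustration:

```python
# Sketch: multiplexing parallel input streams onto one sequential tape.
# The stream names and values are hypothetical illustrations.

vision  = [10, 11, 12]
hearing = [20, 21, 22]
touch   = [30, 31, 32]

# Multiplex: tag each datum with its source, interleaved in time order.
tape = []
for t in range(3):
    tape.append(("vision", t, vision[t]))
    tape.append(("hearing", t, hearing[t]))
    tape.append(("touch", t, touch[t]))

# Demultiplex: a strictly sequential reader recovers every stream.
recovered = {}
for source, t, value in tape:
    recovered.setdefault(source, []).append(value)
```

Nothing in the recovered streams depends on their having been read sequentially rather than in parallel.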
 
For determining the next state of the system?

I am pretty sure it does. Even tiny differences in one state will lead to a huge divergence in just a short time.

But for a logic chip the state, as measured in low/high will absolutely determine the next state of the system, measured the same way.

No, that's not precisely the case. The state will probably determine the next state of the system, but there is a margin for error. There's a finite failure rate on all chips.

Small differences in voltages will not lead to huge divergences in the process.

If a logic chip worked like a hurricane then we would not be communicating.

The same goes for any man-made device.

However, this is not what was in the original spec. All that you said was, IIRC, a well-defined state that could be symbolically represented. Now you want state transitions to be predictable.

But predictability is simply a statement about the knowledge we have about a system. The change of state of a hurricane is just as deterministic as the change of state of a silicon(e) chip. It's just difficult for us to predict. But that doesn't mean that different physical laws apply.
 
But it would behave like the system?

For example it might get onto the internet and start debating about consciousness?

It might, like Interesting Ian, argue that "I have incorrigible certain knowledge that I am conscious" - that sort of behaviour?

Well, it might. But it might not. Same as with any other simulation, we find out how good it is by trying it out.

And if such a brain simulator were to post on JREF saying "I'm taking the dog for a walk in the park" we would know that there was no dog, and there was no park - just a simulation. And if a human being sitting alone in an empty room were to say the same thing, we would regard him as delusional and untrustworthy.

Connect up a robot that can actually walk in the park, and we might have something a bit closer.
 
The brain operates on a stream of input data. It does operate more or less in real time, but the time we experience is an artifact of brain processing.

The only real difference is that the data input stream for the brain comes in from a number of different sources in parallel.

But there is no reason why a single tape cannot represent a set of parallel data sources.

So I don't see any reason to suppose the Turing model could not predict how such an interaction could occur.

To me time is not the problem.

Yes, a Turing machine can represent, and predict - but it cannot interact. A brain can. When fundamental functionality is missing, one has to question whether the Turing machine is the right model to use. The brain has to respond to certain stimuli within a specified time limit. A Turing machine cannot do so.

The claim is that because the brain performs calculations, if those same calculations can be performed by a Turing machine, then a Turing machine can be functionally equivalent. But "performing calculations" does not fully describe what the brain does. Its essential function is real-time, and a comprehensive computer model of the brain has to be real-time as well.
 
If a logic chip worked like a hurricane then we would not be communicating.

Yes, for a device to transfer information between human beings, it needs to have human-predictable behaviour. But the point about the computational hypothesis is that it deals with the objective behaviour of a device independent of its interaction with human beings.

IMO, what is important about a computer is its capacity to transfer data back and forth to human intelligences. If it is considered in respect of its own consciousness, then the ability of human beings to predict what it does is neither here nor there. The behaviour of a hurricane is just as deterministic as that of a computer.
 
No, that's not precisely the case. The state will probably determine the next state of the system, but there is a margin for error. There's a finite failure rate on all chips.
So you can't have a perfect computer. So what?
The same goes for any man-made device.

However, this is not what was in the original spec. All that you said was, IIRC, a well-defined state that could be symbolically represented. Now you want state transitions to be predictable.
But that was in my original spec, which you keep changing.

Here it is again:

physical process: a deterministic or a random process where each state's measurement does not necessarily have a precise symbolic representation

computation: a deterministic process where each state's measurement can have a precise symbolic representation.​

And I later clarified precise as exact.

So if it is a deterministic process where each state's measurement can have a precise symbolic representation, then clearly the state transition is predictable.
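A minimal sketch in the spec's own terms may help: states that are exact symbols (bits here) plus a deterministic transition rule give a fully predictable trajectory. The two-bit rule below is a hypothetical example, not anything from the spec:

```python
# Sketch: when every state has an exact symbolic representation and
# the transition rule is deterministic, the whole trajectory is
# predictable. The two-bit rule below is a hypothetical example.

def step(state):
    a, b = state          # exact symbols: each bit is 0 or 1
    return (b, a ^ b)     # deterministic rule on those symbols

def trajectory(start, n):
    s, trace = start, [start]
    for _ in range(n):
        s = step(s)
        trace.append(s)
    return trace

# Two runs from the same symbolic state agree at every step:
# the state transitions are predictable, not merely deterministic.
```

Because the states are exact symbols, re-running from the same start reproduces the trace bit for bit; there is no measurement error to amplify.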
But predictability is simply a statement about the knowledge we have about a system. The change of state of a hurricane is just as deterministic as the change of state of a silicon(e) chip. It's just difficult for us to predict. But that doesn't mean that different physical laws apply.
Who said different physical laws apply?????

Don't you know that too much straw is a fire hazard?

PS - thanks for continuing to make sly digs at my spelling error. Keep up the good work.
 
Sure, and 2 + 2 = 5, for sufficiently large values of 2.



You are being so disingenuous here that it's hard to continue to take you seriously.

Do you honestly think that any serious AI researcher or advocate thinks that a bare theoretical Turing machine, uninstantiated, could exhibit consciousness?

No, what they think is that it's only necessary to instantiate a Turing machine, and not to instantiate the real time elements. Any implementation of a Turing machine will have real-time elements in its operation. That's not the point. It's that the Turing model is not a sufficient description of the basic functionality of the brain. If you use an inadequate model, then you can't make any reliable predictions.

The point of a model is to consider all essential aspects of the thing being modelled. You don't ignore an essential element and say "Ah, never mind, we'll sort that out when we build a real example."
 
You're going to have to unpack that, my friend.

What makes consciousness such a special function that it alone requires no executive mechanism to make it happen?

And if it is special, please explain how in the world that happens.

This is where you lose me, Piggy; I don't understand your definition of 'executive'. Hopefully we can agree that consciousness is a spectrum of different events: an amoeba is probably not conscious, a flatworm is somewhat, and so on. Different levels of neural organization will fall along different places on the 'consciousness' spectrum.

Now are you saying that the executive is a volitional component? Or that it represents another level of neural networking?

ETA: Or that we don't have that part of the neural network nailed down? (from reading later post.)
 
So you can't have a perfect computer. So what?

But that was on my original spec, which you keep changing.

Here it is again:

physical process: a deterministic or a random process where each state's measurement does not necessarily have a precise symbolic representation

computation: a deterministic process where each state's measurement can have a precise symbolic representation.​

And I later clarified precise as exact.

So if it is a deterministic process where each state's measurement can have a precise symbolic representation, then clearly the state transition is predictable.

Deterministic does not mean predictable. I repeat - the behaviour of the hurricane is as deterministic as the behaviour of the silicon chip. The predictability of a system is a function of our knowledge of the system and our ability to model its behaviour.

Who said different physical laws apply?????

Don't you know that too much straw is a fire hazard?

If you are referring to the objective behaviour of physical systems, then the fact that the same physical laws apply means that they are equally deterministic. Being deterministic is an objective property of the system. Being predictable is a statement about our knowledge of the system. If you're describing objective properties of the system, then it's necessary to accept that its behaviour is determined solely by the laws of physics, not by what we know about it.

PS - thanks for continuing to make sly digs at my spelling error. Keep up the good work.

Well, maybe you shouldn't have been so quick to correct me.
 
You can statistically model a hurricane with a much smaller and simpler system and produce a reasonably accurate prediction of its future behaviour.

The only way to model a computer is with a larger and more complex computer. Statistical models do not work at all.
 
Yes, a Turing machine can represent, and predict - but it cannot interact. A brain can.
A brain interacts with what? Data. A TM interacts with data.
When fundamental functionality is missing, one has to question whether the Turing machine is the right model to use. The brain has to respond to certain stimuli within a specified time limit. A Turing machine cannot do so.
Many brains don't respond within certain limits. Mine for example - hopelessly slow processing time - my kids have unfortunately inherited this from me.

But all this seems irrelevant to me.
The claim is that because the brain performs calculations, if those same calculations can be performed by a Turing machine, then a Turing machine can be functionally equivalent. But "performing calculations" does not fully describe what the brain does. Its essential function is real-time, and a comprehensive computer model of the brain has to be real-time as well.
Say we have a computer capturing data live and in real time from cameras, microphones, touch sensors and odor detectors. We have the computer formulate these into a real-time 3D model of its own environment.

Now suppose that we take a dump of all the data that the computer captured and feed it to another computer, running the same algorithm and running the program only half as fast, naturally feeding it the data only half as fast.

The algorithm will still be identical.

Suppose we get a team of people to desk check part of the run, again using a dump of the data captured by the original run.

Again the algorithm will be exactly the same.

The time thing is a red herring.
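The replay argument can be sketched like this: the algorithm sees (timestamp, reading) pairs, so the timestamps travel with the data and the wall-clock speed at which the data is fed in cannot affect what is computed. The `process` function and sample values are hypothetical illustrations:

```python
import time

# Sketch of the replay argument: process() consumes (timestamp, reading)
# pairs, so timestamps travel with the data and the wall-clock speed at
# which we feed the data cannot affect what is computed.
# process() and the sample values are hypothetical illustrations.

def process(sample):
    timestamp, reading = sample
    return (timestamp, reading * 2)

def run(samples, pacing=0.0):
    # pacing changes only how fast we feed the data, not the result
    out = []
    for s in samples:
        if pacing:
            time.sleep(pacing)
        out.append(process(s))
    return out

captured = [(0.0, 5), (0.1, 7), (0.2, 9)]  # dump of the "live" input
```

Running `run(captured)` flat out and `run(captured, pacing=0.01)` at a crawl produces identical results, which is the sense in which the dump-and-replay run executes the identical algorithm.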
 
Yes, and any real instantiations of Turing machines are capable of real-time action. Your claim that an abstract Turing machine cannot simulate a brain because there is no notion of time in the Turing machine model is misleading and wrong -- at worst, we would have to simulate an entire universe, and if we are going to do that we may as well simulate all possible universes, which is easier anyway. Please skip to the more interesting arguments.

The concept of modelling doesn't seem to have taken hold. I'll say it again - if the behaviour of a brain includes significant functional elements not included in the Turing model, then that model will not accurately predict the behaviour of the brain. It's quite clear that the brain carries out real-time operations. It's also clear that the Turing model doesn't deal with real-time operations.

The sensible approach is to provide a model which does at least manage to cover the things which we know that the brain does, and is not actually known to be wrong.

Anyone who has written simulations of real-time systems, and written actual real-time control and monitoring systems, whether it's a chemical works or a reservoir, knows that they are different things.

I find it bizarre that the mere possibility that the things that we know are part of the essential functioning of a working brain might actually be part of the working model of consciousness is to be ruled out in advance. I can't think of any sensible reason for this, except the history of AI research on large batch-mode mainframes, while real-time systems were constructed by engineers on different computers somewhere else. But really, since the history of the real-time interrupt goes back over fifty years, there's no excuse.
 
The concept of modelling doesn't seem to have taken hold. I'll say it again - if the behaviour of a brain includes significant functional elements not included in the Turing model, then that model will not accurately predict the behaviour of the brain. It's quite clear that the brain carries out real-time operations. It's also clear that the Turing model doesn't deal with real-time operations.
Sorry, wrong. Time to a Turing machine is simply data. You just put the time data on the tape along with everything else.

Anyone who has written simulations of real-time systems, and written actual real-time control and monitoring systems, whether it's a chemical works or a reservoir, knows that they are different things.
I spend about 60 hours a week doing that. You're wrong.
 
A brain interacts with what? Data. A TM interacts with data.

Many brains don't respond within certain limits. Mine for example - hopelessly slow processing time - my kids have unfortunately inherited this from me.

I don't think, however, that you forget to breathe.

But all this seems irrelevant to me.

Say we have a computer capturing data live and in real time from cameras, microphones, touch sensors and odor detectors. We have the computer formulate these into a real-time 3D model of its own environment.

Now suppose that we take a dump of all the data that the computer captured and feed it to another computer, running the same algorithm and running the program only half as fast, naturally feeding it the data only half as fast.

The algorithm will still be identical.

Suppose we get a team of people to desk check part of the run, again using a dump of the data captured by the original run.

Again the algorithm will be exactly the same.

The time thing is a red herring.

Yes, it's quite straightforward to take a program designed for real time data acquisition and control, grab the data going in and out, and run a simulated version of the program. What isn't possible is to take a simulation program and just plug it into the real world. It has to be rewritten to add in the real time facility that isn't present in a simulator.

I know this because I've had to do both things.
 
Robin said:
For all concerned - I would be interested in your opinion.

Suppose that there was a sufficiently detailed computer model of a human brain, and say this brain is given realistic sense data, including modelling of the sense data associated with body control and feed back.


Perhaps it starts as a model of an embryo and models the brain's development up to birth, then childhood, even adulthood - obviously this would take vast computing power.

But suppose that could be done - do you think it possible that the virtual human being modelled would exhibit human like behaviour?

For my own part I cannot see any reason why it would not exhibit human like behaviour.
Well, you know my answer - I consider it impossible that it wouldn't. :)
That is three saying it would, nobody yet saying it wouldn't.

How about the others here?

I don't have a firm opinion one way or the other. When people talk about a model 'behaving', that generally refers to information outputs that are displayed for our edification. It seems to me that any computer simulation's behavior is nearly as dependent on how it is interpreted by humans as it is on the actual output.

I guess I don't understand how you define 'human-like behavior' for a computer model. Could you explain what you mean? How is the model's 'behavior' discerned and interpreted by humans? Does it interact with us directly, or only with simulated other beings in its matrix-like existence?
 