My take on why the study of consciousness may not be as simple as it's made out to be:

But self-awareness is not the problem.
And I would say that self-awareness is precisely the problem.

If I build a robot with cameras and touch sensors and program it to build a model of its environment by which it can navigate, then by definition it is self-aware, because it could not navigate its environment if it did not also model itself.
No. To navigate, it merely has to be aware.

But what I am getting at is the "what it is like" component of consciousness.
That is precisely self-awareness.

I might program my robot to classify sensory data according to state A, things that are helpful to survival, and state B, things that are unhelpful to survival.
Which does not require self-awareness.

But as complex as the robot might become, I could not think of states A and B ever being more than data - I could not think of them as being pleasure and pain.
Why not? That's all that pleasure and pain are, after all.
 
As usual, my arguments have been misrepresented.
No they haven't. You're just wrong.

I'm claiming that there is insufficient evidence to justify the claim that the brain, or components of the brain, could be replaced with some form of computer equipment.
Yes, we know what you're claiming. You're wrong.

If the brain is Turing-equivalent, we can replace it with any other Turing-equivalent device.

I don't argue that the brain is Turing-equivalent. I don't claim that it's more powerful than a Turing machine, though, just that it's unreliable.

However, the brain is a physical system, and physical systems can be simulated with arbitrary precision on a Turing machine. So we can still replace it with any Turing-equivalent device.

This was all proven decades ago.
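
To make the simulation claim concrete, here is a minimal sketch (the damped oscillator and its parameters are an arbitrary example, not anything brain-specific): a physical system's equations of motion can be stepped forward numerically on an ordinary computer, and shrinking the step size buys whatever precision is required.

```python
# Minimal sketch: simulating a physical system (a damped harmonic
# oscillator) by numerical integration. Halving dt roughly halves
# the per-step error; nothing here exceeds what a Turing-equivalent
# machine can do.

def simulate(x0, v0, k=1.0, damping=0.1, dt=0.001, steps=10_000):
    """Integrate x'' = -k*x - damping*x' with explicit Euler steps."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k * x - damping * v  # acceleration from the current state
        x += v * dt               # advance position
        v += a * dt               # advance velocity
    return x, v

print(simulate(x0=1.0, v0=0.0))  # state after 10 simulated seconds
```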

In this case, I think that the onus is on the people making such a positive claim to justify it with hard evidence that all the functionality of the brain can be duplicated by computers.
There is no physical function that can't be simulated by computers.

You're going to bring up the a-simulated-orange-tree-doesn't-give-me-oranges objection. Don't bother. You've already raised that objection a dozen times. It's a category error, and a particularly absurd one.

As an aside, it's noteworthy that the brain doesn't, in fact, function like a computer.
As an aside, you're wrong.

That's merely another problem to be overcome should an effort be made to firstly, define all the functions of the brain, and secondly to demonstrate how computers could perform all those functions.
No. In fact, we don't need to do any of that. We've already proven that a computer can perform all the functions of the brain, no matter what those functions might be.
 
I've noticed this tendency in the consciousness debates to put forward entirely hypothetical arguments as if they'd actually happened. We haven't actually managed to replace any part of the nervous system so far - even the part whose function is (as far as we know) fully understood.

Irrelevant. You keep saying that there are fundamental differences between the two, basically _because_ there are fundamental differences. How is that not circular?

As usual, my arguments have been misrepresented.

Minority arguers in this forum usually claim this.

I'm claiming that there is insufficient evidence to justify the claim that the brain, or components of the brain, could be replaced with some form of computer equipment. In this case, I think that the onus is on the people making such a positive claim to justify it with hard evidence that all the functionality of the brain can be duplicated by computers.

I understand this. However, each time someone mentions evidence to that effect you shift the burden of proof to exclude said evidence.
 
And I would say that self-awareness is precisely the problem.
But self-awareness is trivial.
No. To navigate, it merely has to be aware.
Could a robotic car navigate a virtual environment if it only modelled its environment and not itself? How would it know where it was?

Could a self-parking car park itself if it only modelled the kerb and the cars on either side of the space, but did not model itself? How?
That is precisely self-awareness.
Not at all. There could be someone in a state of complete dissociation with no concept of self, but they could still feel pain. They would just not know that the pain pertained to them.
Why not? That's all that pleasure and pain are, after all.
That is a claim, not a fact.
 
No. In fact, we don't need to do any of that. We've already proven that a computer can perform all the functions of the brain, no matter what those functions might be.
I think I would prefer to wait for proof of the pudding.

When I see the first simulation of a mouse's brain function on a computer that provides mouse-like behaviour I would begin to be more confident of this.
 
But self-awareness is trivial.
Simple, yes, but not trivial. Self-awareness brings about a revolution in the classes of behaviours a system can exhibit.

Could a robotic car navigate a virtual environment if it only modelled its environment and not itself? How would it know where it was?

Could a self-parking car park itself if it only modelled the kerb and the cars on either side of the space, but did not model itself? How?
I'm not sure what you mean by self-aware, and by "modelling itself", if you're asking these questions.

The car could simply move in one direction until it stopped, then try another direction. That certainly doesn't require the car to be self-aware; though it does require awareness.

All it needs is a goal of moving this object past these other objects. It doesn't need to associate any of the objects with itself, or even have the concept of self, or the ability to associate.
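
As a minimal sketch of that bump-and-retry strategy (bump_sensor and drive are hypothetical interfaces, invented for illustration): nothing in the loop refers to a self, only to a sensor reading and a motor command.

```python
# Sketch of a purely reactive parking move: creep until blocked,
# then reverse direction. No model of the environment, let alone
# of a "self" -- just sensor in, motor command out.
# (bump_sensor and drive are hypothetical hardware interfaces.)

def reactive_park(bump_sensor, drive, max_steps=8):
    direction = +1  # +1 = forward, -1 = reverse
    for _ in range(max_steps):
        if bump_sensor():            # contact: try the other way
            direction = -direction
        drive(direction, speed=0.2)  # creep in the current direction

# Demo with stub hardware: "bumps" on the 4th and 8th steps.
hits = iter([False, False, False, True, False, False, False, True])
reactive_park(lambda: next(hits, False),
              lambda d, speed: print("drive", d, "at", speed))
```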

Not at all. There could be someone in a state of complete dissociation with no concept of self, but they could still feel pain. They would just not know that the pain pertained to them.
Then they don't have a what-it-is-like. They are not a be-able thing. You need self-awareness for that. Some level of self-awareness, not necessarily that of a human, but something that is referring back to self.
 
I have just a little problem with all of this.

People say they have no problem at all with the idea that a program being desk-checked on pencil and paper over a billion years could result in a brief conscious moment just as we experience it - a note from Bird's saxophone, the taste of a peach or some such.

Not just that they think it possible, but that they don't even see how the idea might be problematical.

That this instant you are experiencing right now could have resulted from people writing down numbers in little boxes on pieces of paper.

Well I have got to wonder if they are really serious, or just maintaining a debating point.

Can the taste of a peach really result from numbers being written down on paper with a pencil?
 
I'm not sure what you mean by self-aware, and by "modelling itself", if you're asking these questions.
Self aware. Aware of self.

Modelling itself. Well I don't know how I can put that better.

You are aware of how a computerised car might model its environment using sensors to feed data into a 3D + t mathematical model? How it might model a tree to avoid crashing into it? How it might model another car on the road to avoid crashing into it?

Well it would model itself in just the same way.
The car could simply move in one direction until it stopped, then try another direction. That certainly doesn't require the car to be self-aware; though it does require awareness.
You mean until it crashed into the car in front and the car behind. Well yes, I suppose it could do that. But to get any kind of intelligent self-parking it would have to model itself.
All it needs is a goal of moving this object past these other objects.
And that is all we have.
It doesn't need to associate any of the objects with itself, or even have the concept of self, or the ability to associate.
We are not talking about the concept of self; the expression under discussion is "self-aware". If a robotic car can detect and model the position, size and speed of another car, we would say it was aware of that car, wouldn't we? If a robotic car could detect and model the position and speed of itself, then it would be self-aware.

And I don't know how a robotic car could avoid associating. It has to know that the brake and accelerator functions pertain to the object representing itself, and not the car in front.
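
To make the disagreement concrete, here is one sketch of how the self-modelling version could look (all names invented for illustration): the planner tracks kerb-side objects in a single coordinate frame, and the vehicle itself is simply one more tracked object, the one to which the control outputs are bound.

```python
from dataclasses import dataclass

# Sketch: the vehicle is modelled the same way as everything else,
# as one more object with a position, length and speed. The only
# asymmetry is that braking and steering act on the entry tagged
# "self". (All names are illustrative.)

@dataclass
class TrackedObject:
    x: float       # position along the kerb (metres)
    length: float  # object length (metres)
    speed: float   # current speed (m/s)

world = {
    "car_ahead":  TrackedObject(x=12.0, length=4.5, speed=0.0),
    "car_behind": TrackedObject(x=0.0,  length=4.2, speed=0.0),
    "self":       TrackedObject(x=5.5,  length=4.3, speed=0.0),
}

def gap_clear(world):
    """Does the space between the parked cars fit the self-model?"""
    me, ahead, behind = world["self"], world["car_ahead"], world["car_behind"]
    gap = ahead.x - (behind.x + behind.length)
    return gap > me.length + 1.0  # a metre of manoeuvring margin

print(gap_clear(world))  # True: the modelled self fits the gap
```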
Then they don't have a what-it-is-like.
Can you justify that statement?
They are not a be-able thing. You need self-awareness for that. Some level of self-awareness, not necessarily that of a human, but something that is referring back to self.
So a person with no concept of self cannot feel pain? I think you are wrong there. Certainly you have to justify the position.
 
If you want to argue that we can't mathematically model the Universe, you have the entire history of, well, history to indicate that we can.
But we have not mathematically modelled a working brain and observed animal-like behaviour in the model yet, as far as I know.

So again, I will wait until we have before I decide that we can.
 
I understand this. However, each time someone mentions evidence to that effect you shift the burden of proof to exclude said evidence.

Well, I'd prefer that, instead of mentioning evidence, they actually quoted it, or showed a reference to it. Most of the time it's in the form of "people smarter than you think this, so why don't you?"
 
Easily done. Artificial neural networks are Turing-equivalent; Turing machines are neural-network equivalent. That's a mathematical theorem, which is about as hard evidence as it gets.

Since artificial neural networks mimic a subset of human neural function, it's therefore provable that human neural function is also Turing equivalent.

It's provable that the component of human mental activity that is equivalent to a neural network of computers is equivalent to a neural network of computers.

All you have to do is demonstrate that this is the entire functionality of the brain and you're done.
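
For what it's worth, one direction of the quoted equivalence is easy to illustrate: a single threshold neuron with hand-picked weights computes NAND, and NAND gates suffice to build any digital circuit, a computer included. This sketch only shows the flavour of the textbook construction, not the full theorem.

```python
# A single artificial neuron computing NAND. Since any digital
# circuit -- including a computer -- can be built from NAND gates,
# a network of such units can in principle emulate a Turing
# machine (given unbounded memory). Weights are hand-picked.

def neuron(inputs, weights, bias):
    """Threshold unit: fires iff the weighted sum plus bias is positive."""
    return int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

def nand(a, b):
    return neuron([a, b], weights=[-2, -2], bias=3)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))  # prints the NAND truth table
```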

Since there's no known or hypothesized capacity that is greater than Turing equivalent,

This is meaningless. Capacity of what?

Again, the assumption is that the brain works like a Turing machine, and hence it can't be more powerful than one.

there's no known or hypothesized way in which Turing machines can be less powerful than human brains.

Another meaningless word. "Powerful". Of course a given Turing machine can calculate the square root of 99.5 as quickly as a human brain. Can it want something, though?

Which puts the ball back in your court. If you are hypothesizing that there is something -- anything -- that humans can do that Turing machines cannot, you should be able at a minimum to describe what that something is, since it's your hypothesis.

A human being can want something.

And that's more or less what Lucas-Penrose tried, using the Gödelian argument. Unfortunately, they demonstrably got it wrong in several regards. Which means we're back to the "there's no known or hypothesized way" in which human brains can be more powerful.

Game-Set-Match.

If you are going to dismiss some very subtle and complex arguments, it would be better not to do so using undefined and irrelevant terms such as "powerful".
 
Self aware. Aware of self.

Modelling itself. Well I don't know how I can put that better.

Perhaps a better example would be inside the computer - for any operating system to run, it needs a fairly extensive model of itself and its resources to manage the processes it's running: to see that each gets a slice of the processor, that memory is managed, etc. However, I'd be careful how you define the "self" in "self-aware" at that point.
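
A minimal sketch of that kind of self-model (field names simplified and invented for illustration): a table of the system's own processes and resource usage, which the scheduler consults and updates as it runs.

```python
# Sketch of an operating system's self-model: a table of its own
# processes and their resource usage, maintained by the scheduler.
# The "self" being modelled here is the running system itself.

processes = [
    {"pid": 1, "name": "init",   "memory_kb": 1024, "ticks_used": 0},
    {"pid": 2, "name": "editor", "memory_kb": 8192, "ticks_used": 0},
    {"pid": 3, "name": "daemon", "memory_kb": 2048, "ticks_used": 0},
]

def schedule_round(processes, slice_ticks=10):
    """Round-robin: give each process one time slice and record it."""
    for proc in processes:
        proc["ticks_used"] += slice_ticks  # the system updates its own model

def total_memory(processes):
    return sum(p["memory_kb"] for p in processes)

schedule_round(processes)
print(total_memory(processes), "kB accounted for")
```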
 
Isn't comprehend just another word for understand?
Sure. And what enables one to comprehend?

Can I not be conscious of something without understanding it?
I don't understand the point of your question. If you are not conscious you won't understand anything.

Do you mean is it possible to be conscious of something and not understand it? What do you mean by "understand"? Is it possible that you understand that it exists?

The only relevant ones being the ones actually written by Searle
It's sourced. By your logic textbooks can't be trusted.

Yes, that is his overall aim, the particular point he is making in the Chinese room argument is, as far as I remember, about meaning.
The overall aim is all that is relevant to what I'm saying.

By the way, don't treat SEP as gospel, I have often pointed out places where it is egregiously inaccurate.
A.) It's sourced. B.) There are millions of links (many of them sourced) that make the point.
 
I have just a little problem with all of this.

People say they have no problem at all with the idea that a program being desk-checked on pencil and paper over a billion years could result in a brief conscious moment just as we experience it - a note from Bird's saxophone, the taste of a peach or some such.

Not just that they think it possible, but that they don't even see how the idea might be problematical.

That this instant you are experiencing right now could have resulted from people writing down numbers in little boxes on pieces of paper.

Well I have got to wonder if they are really serious, or just maintaining a debating point.

Can the taste of a peach really result from numbers being written down on paper with a pencil?

Yes.

But it gets even more bizarre than that.

Given infinite time, and a large enough system, the system may transiently become isomorphic to the pattern of information flow that is your current consciousness. As long as the isomorphism exists, that system will think it is you.

So it is possible that the instant I am (or you are) experiencing right now is nothing more than fleeting states of some random system in some random universe.

But let us talk about this idea being "problematical." Yes, on the one hand, it is extremely counterintuitive -- it doesn't "feel" right. But on the other hand, it uncovers the possibility that immortality may actually be a tenable idea. Because if "I" am merely information processing, then regardless of the instance of "I" that is currently ... instanced ... it is always the same "me."

This means not only that I should be able to transfer myself to any suitable medium, but also that I should be able to pause myself for an arbitrarily long duration and resume without knowing the difference.

You know all those people who froze themselves when they died? Guess what -- if we are right (the strong AI supporters, that is) then as long as those corpses' neural connections are still measurable, we will be able to fully reconstruct their consciousness (minus whatever short-term memories never had a chance to be physically realized, of course) once the technology is available. And that is just the tip of the iceberg.

In other words, the world will be an exponentially more exciting place if we are right. I would gladly accept a higher probability of solipsism being true if it means I might be able to exist free of this body.
 
We think it can and we are attempting to falsify the hypothesis. You are simply saying no.

Pixy, for example, has no doubts on the matter. He's not putting forward a hypothesis, he's saying that the problem is solved. Rocketdodger doesn't seem to have many doubts. Drkitten has just posted what he considers to be absolute proof that the brain has to operate like a Turing machine.

If you agree that the hypothesis remains unproven, you would seem to be in closer agreement with me than with them.

This is a broad statement. As an aside, in many ways, the brain DOES, in fact, function like a computer.

And in many ways it doesn't.
 
Pixy, for example, has no doubts on the matter. He's not putting forward a hypothesis, he's saying that the problem is solved. Rocketdodger doesn't seem to have many doubts. Drkitten has just posted what he considers to be absolute proof that the brain has to operate like a Turing machine.

If you agree that the hypothesis remains unproven, you would seem to be in closer agreement with me than with them.
The weight of the evidence is for the proposition, IMO. I'm happy to concede that we are not there yet. We might not get there for 100 years (I seriously doubt that). But it would be silly of me to claim to be strictly agnostic in the face of the evidence. Sure, it's possible that there is something significant that we are missing in our understanding. I'll grant that.

The way I look at it is that we have people researching both the supernatural and AI. No significant advancements into the supernatural have ever been made; we make advancements in the field of AI every day. The experts in the field are advancing. We've been here before: the history of science is simply the closing of gaps in our understanding. To think that this one is insurmountable strikes me as a bit hopeful and not realistic. Which is why I'm no longer a dualist. If you want to color me agnostic then that's fine, but color me an agnostic who fully expects that the problem will be solved.

And in many ways it doesn't.
None that would preclude a computational model of consciousness.
 
But self-awareness is trivial.

Could a robotic car navigate a virtual environment if it only modelled its environment and not itself? How would it know where it was?

In any case, there's no "self-awareness" involved. There's a computer and there's a car. The computer controls the car. The car is no more self-aware than it is when it's controlled by a human being.

We may choose to describe the car/computer combination as a single self, but it doesn't imply that there's any real self-awareness involved. The computer has no model of itself, and neither does the car, no matter how sophisticated the system.

If we consider the computerised car as self-aware, then we should consider the car/human combination as a single entity. That has a far better claim on self-awareness, since the human is aware of both himself and the car.

Self-awareness is one of the many terms used in this context which just evaporate into nothingness when looked at closely.

Could a self-parking car park itself if it only modelled the kerb and the cars on either side of the space, but did not model itself? How?

Not at all. There could be someone in a state of complete dissociation with no concept of self, but they could still feel pain. They would just not know that the pain pertained to them.

That is a claim, not a fact.
 
It's provable that the component of human mental activity that is equivalent to a neural network of computers is equivalent to a neural network of computers.

All you have to do is demonstrate that this is the entire functionality of the brain and you're done.

Done already. We have no defined or hypothesized concept of a reasoning capacity greater than a Turing machine.


This is meaningless. Capacity of what?

Information processing and problem solving.


If you are going to dismiss some very subtle and complex arguments, it would be better not to do so using undefined and irrelevant terms such as "powerful".

Good thing I didn't then. I used a well-defined and relevant term of art, such as "powerful." If you don't understand the mathematics of Turing completeness, perhaps you shouldn't be attempting "subtle and complex arguments" that by your own admission you don't understand.
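
For readers following along: "powerful", in the computability sense, refers to the set of functions a machine can compute, and a Turing machine is a very simple object. A minimal simulator, as a sketch (the rule table is an arbitrary example that inverts a bit string), shows the kind of thing the claim quantifies over; "more powerful than a Turing machine" would mean computing a function that no such rule table can.

```python
# Minimal Turing machine simulator. "Power", as a term of art, is
# about which functions a rule table like this can compute, not
# about speed. The example machine inverts a binary string.

def run(tape, rules, state="start", blank="_", max_steps=1000):
    cells, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Rules: (state, symbol read) -> (symbol to write, move, next state)
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1011", flip))  # -> 0100
```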
 
