Has consciousness been fully explained?

This is what Hofstadter and Ramachandran and Dennett are talking about - consciousness is the self-referential process from the point of view of the process itself.

I can't speak for the others because I am not as familiar with their work, but Dennett is most certainly not saying that SRIPs = consciousness. He explicitly says that neurons, viruses, bacteria or any of the "robots" that animals are composed of are not conscious.
 
Precisely. They don't recognize consciousness as a behavior.
Of course it's a behaviour. What else could it be?

so they believe it can be pulled off without a mechanism to execute it at the end of the algorithm
There is nothing to be pulled off, no additional mechanism required, nothing more to execute, and no end of the algorithm.

Consciousness is not the output of a program. It's the ongoing self-referential operation of the program.

It's a verb, not a noun. I remember someone saying something similar before. Who might that have been?

(which I would contend is an abstraction of a physical process anyway in itself)
An algorithm is an abstraction of a physical process, but that's irrelevant, because an algorithm doesn't do anything until you instantiate it in a physical system.

just as we need mechanisms to make us blink, shiver, focus light on the retina, and so forth.
How is that relevant?
 
I can't speak for the others because I am not as familiar with their work, but Dennett is most certainly not saying that SRIPs = consciousness. He explicitly says that neurons, viruses, bacteria or any of the "robots" that animals are composed of are not conscious.
Because they don't perform self-referential information processing.

Dennett makes this explicit with his example of the thermostat, which is aware but not self-aware.
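
A throwaway Python sketch of the distinction (the Thermostat class is invented for illustration): the device responds to its input, so it is "aware" in Dennett's rudimentary sense, but nothing in its rule refers to the device itself.

```python
class Thermostat:
    """Invented example class. It reacts to the world (temperature),
    so it is 'aware' in Dennett's rudimentary sense."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def react(self, temperature: float) -> str:
        return "heat on" if temperature < self.setpoint else "heat off"

# The rule refers only to the input, never to the Thermostat
# itself: reference, but no self-reference.
t = Thermostat(20.0)
print(t.react(18.5))   # heat on
```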
 
If you're at all interested in self-reference, you owe it to yourself to read Gödel, Escher, Bach. It's a masterful explanation of the subject and wonderfully entertaining at the same time.
 
No, that's not self-reference, that's merely reference.


All of this is correct except the suggestion that the Roomba is engaging in self-reference. It's not. If you read the Wikipedia article on reflection, it will give you a good idea of how self-reference is used in conventional computing.
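
For a concrete taste, here is a minimal Python sketch of reflection (the Roomba class and method names are invented): the running program inspects its own structure and rewrites its own behaviour on the fly.

```python
import types

class Roomba:
    """Invented robot class, just for illustration."""

    def on_bump(self):
        return "turn left"

bot = Roomba()

# Reflection: the running program inspects its own structure...
print(type(bot).__name__)                               # Roomba
print([m for m in dir(bot) if not m.startswith("_")])   # ['on_bump']

# ...and can rewrite its own behaviour while it runs, replacing
# the method with a new one on the fly.
def smarter_on_bump(self):
    return "back up, then turn right"

bot.on_bump = types.MethodType(smarter_on_bump, bot)
print(bot.on_bump())   # back up, then turn right
```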

I really don't see how the difference is significant, but let's say it is. Is it your contention, then, that if a Roomba modified some of its own internal lines of code to change its behavior when it encountered an object, instead of just following already existing if/then instructions, it would suddenly become aware of itself?
 
If you're at all interested in self-reference, you owe it to yourself to read Gödel, Escher, Bach. It's a masterful explanation of the subject and wonderfully entertaining at the same time.

Will do. Might I make a suggestion as well? Listen to the Teaching Company lectures on philosophy of mind by Searle (which you can find on teh internets). I don't agree with him, but I do think he is one of the best at presenting the case against strong AI, and I do think it is a good exercise to get exposed to the best counterarguments to one's position.
 
He's offering an explanation for self-awareness. It may not be well evidenced yet and it may be wrong, but it is still an attempt at an explanation. When I posted the link originally, I said it was an interesting "hypothesis," which it clearly is. I certainly don't think it is the final answer for self-awareness, not yet at least.

I think that's so. I don't think he's making any more claims for his idea than it merits.
 
The first claim is incontrovertible. Unless, as someone tried to do earlier in the thread, you define computation as something only a human can do.

Or you think that "computation" can mean more than one thing.

The second claim can't be right, for the trivial fact that the brain is a physical organ that does things like disperse and uptake neurotransmitters--which an algorithm doesn't do.

Which is a very important point. The example I've given before is that your brain can do things like catch a ball, which a Turing machine can't do. Of course, a Turing machine can simulate movement of the ball as well as the operations of the brain, but it cannot actually control the arm.

One of the assumptions of the computational model is that this just doesn't matter.

What Pixy is claiming, I think, is that if we know the exact effects of neurotransmitter dispersal and re-uptake on the information processing (IP) the brain does, then we can model those functions in an algorithm, such that the effects on the algorithm's IP are identical to what the brain does.

Essentially, it might be more clear to say that the algorithm can do anything that the mind can do--with the understanding that the mind is the algorithm running on the hardware of the brain.
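
A toy sketch of that claim, with completely invented dynamics and constants (nothing here is real neuroscience): if the effect of re-uptake on the brain's IP were known exactly, it would just be another update rule inside the algorithm.

```python
# Toy model with invented dynamics: treat re-uptake as one more
# update rule inside the algorithm. The rate and spike train are
# made up for the example.

def reuptake(concentration: float, rate: float = 0.3) -> float:
    """Pretend re-uptake reclaims a fixed fraction of transmitter."""
    return concentration * (1.0 - rate)

def synapse_step(concentration: float, released: float) -> float:
    """One time step: transmitter is released, then partially reclaimed."""
    return reuptake(concentration + released)

level = 0.0
for spike in [1.0, 0.0, 1.0, 1.0, 0.0]:
    level = synapse_step(level, spike)
    print(f"synaptic level: {level:.3f}")
```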

And since the functions of the brain are real-time, they are not necessarily modelled by the time-independent Turing machine. It's possible to model the time-dependent interaction of the brain with its environment, but not to emulate it.

I've speculated that the fixation on Turing machines, which clearly can't do the job the brain does, is because when AI was taking off, computers largely operated in batch mode, and the Turing model was what they adhered to. Now, when everybody's PC has to be able to show people walking into walls on YouTube, that wouldn't be the immediate assumption.

Sorry to everybody who's been through this before.

I'm sure Pixy will correct me if I'm wrong. :D

He'll at least say

Hypothetical Pixy said:
 
It's chiefly used in technical aspects of quantum mechanics, AFAIAA.

So how does it apply to what anything does, like phone transmissions and other information transfers?

I mean really a phone throws away all that stuff too, so why are you singling out a brain for this bizarre derail? Your TV signal does not care either, or the internet.
 
Infinite, that is going to be a lot of marbles and tubes - think of the economic benefits!

You should have left it as conomic ebenfits! It has a very pleasant ring to it. I had originally thought it was some sort of inside joke I was missing. :D
 
So how does it apply to what anything does, like phone transmissions and other information transfers?

I mean really a phone throws away all that stuff too, so why are you singling out a brain for this bizarre derail? Your TV signal does not care either, or the internet.

A phone doesn't throw anything away. Nor does your TV. What happens is that the engineer who designs the machine arranges it so that out of the vast mass of information passing around, some of it can be directed from one human being to another. Nobody thinks that the sound waves coming out of the earpiece are the only pieces of information actually being exchanged.

Nor would anyone surmise that the telephone is concerned with the content of the conversations passing between the people using it, any more than the collisions of air molecules against the cord. They are all of equal importance to the telephone.

The abstraction of the physical activity which allows the conversation to take place is an engineering matter. It's a way to make the machine useful - suppressing the interaction with the various physical effects just enough to allow information to be transferred from one human to another.

For some reason, the computational view claims that the precise physical interactions which allow consciousness to be created are the same ones the engineers use to implement the designs of the programmers. So only the operations that are useful to human beings have the side effect of making the machine self-aware.
 
robin said:
Note that my wording was "each state's measurement can have a precise symbolic representation"

A hurricane is an example of a physical process where each state's measurement cannot have a precise symbolic representation.
I've seen weather charts, and different states of hurricanes have been represented symbolically. Properties of hurricanes can be measured.
I drew your attention to the exact wording before - why are you ignoring it?

Those weather charts are not precise representations of the state of a hurricane.
 
For all concerned - I would be interested in your opinion.

Suppose that there was a sufficiently detailed computer model of a human brain, and say this brain is given realistic sense data, including modelling of the sense data associated with body control and feedback.


Perhaps it starts as a model of an embryo and models the brain's development up to birth and then through childhood, even adulthood - obviously this would take vast computing power.

But suppose that could be done - do you think it possible that the virtual human being modelled would exhibit human-like behaviour?

For my own part I cannot see any reason why it would not exhibit human-like behaviour.
 
I drew your attention to the exact wording before - why are you ignoring it?

Those weather charts are not precise representations of the state of a hurricane.

Ah, I see - it's precise representations you mean. So is precision an absolute, or a matter of degree, different in every case?
 
Or you think that "computation" can mean more than one thing.

I find that your definitions of words tend to be much too liberal. For instance, your definition of "information processing", which includes entropic processes. It's like saying a toilet bowl is a "food processor". Sure, "food" undergoes a "process" in a toilet bowl, but I don't think Cuisinart will be branching out anytime soon.

Which is a very important point. The example I've given before is that your brain can do things like catch a ball, which a Turing machine can't do.

Can we at least *try* to be more precise? Neither a brain nor a Turing machine can catch a ball.

Of course, a Turing machine can simulate movement of the ball as well as the operations of the brain, but it cannot actually control the arm.

Both the brain and the hardware which instantiates the Turing machine can control some physical system (a body) which can catch a ball.

One of the assumptions of the computational model is that this just doesn't matter.

Wrong. See above.

And since the functions of the brain are real-time, they are not necessarily modelled by the time-independent Turing machine. It's possible to model the time-dependent interaction of the brain with its environment, but not to emulate it.

The Turing machine is only time-independent when it is not instantiated in a body. When it's instantiated in a body, its inputs and outputs happen in real time, and so the whole system becomes time-dependent.
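
A minimal sketch of the point, with hypothetical sensor and motor stubs standing in for the body: the transition rule is the timeless, Turing-style part, but once it is wired into a control loop the whole system runs in real time.

```python
import time

def transition(state, observation):
    """The abstract, clock-free part: a pure state-transition rule,
    the kind of thing the Turing model captures."""
    return {"target": observation}   # invented toy rule

# Hypothetical hardware stubs, standing in for whatever sensors
# and motors the body actually provides.
def read_ball_position() -> float:
    return 0.0

def drive_arm_toward(target) -> None:
    pass

state = {"target": None}
for _ in range(100):                     # the body's control loop
    obs = read_ball_position()           # input arrives in real time
    state = transition(state, obs)       # the rule itself is timeless
    drive_arm_toward(state["target"])    # output must happen in real time
    time.sleep(0.01)                     # the loop is bound to the clock
```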

I've speculated that the fixation on Turing machines, which clearly can't do the job the brain does, is because when AI was taking off, computers largely operated in batch mode, and the Turing model was what they adhered to. Now, when everybody's PC has to be able to show people walking into walls on YouTube, that wouldn't be the immediate assumption.

I don't follow. Turing machines were not a fad. They are still central (though maybe not referenced by name) to the whole software industry. But humor me and tell me which model "they" adhere to now, if not to the Turing model?
 
Ah, I see - it's precise representations you mean.
Yes, I said so in the first place and then drew attention to the wording when you appeared to have missed it.

So I think the sarcastic "Ah, I see" is completely uncalled for since the fact is that you failed to read what I said. Twice.

If you had read my definition in the first place you could have gone straight into asking what I mean by precise, instead of wasting both our time.
So is precision an absolute, or a matter of degree, different in every case?
By precise I mean exact. Any set of independent measurements (correctly done) of the same thing will return exactly the same symbolic value.

This is the same in every case.

For example, if the measurement is whether the voltage across silicon junctions is below or above certain thresholds, then there is an exact symbolic representation. Not only that, the symbolic measurement of this information uniquely implies the next state of the thing being measured, as long as the method of measurement is kept the same.
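
A toy sketch of this in Python, with arbitrary invented thresholds: independent measurements of the same signal return exactly the same symbol, even though the raw voltages vary.

```python
# Arbitrary, invented thresholds for the example.
LOW_MAX = 0.8    # volts: anything at or below reads as "0"
HIGH_MIN = 2.0   # volts: anything at or above reads as "1"

def measure(voltage: float) -> str:
    """Map a continuous voltage onto an exact, discrete symbol."""
    if voltage <= LOW_MAX:
        return "0"
    if voltage >= HIGH_MIN:
        return "1"
    return "undefined"   # the forbidden region between thresholds

# Independent (correctly done) measurements of the same signal
# return exactly the same symbol, even though the raw voltages vary.
readings = [3.29, 3.31, 3.30]   # invented noisy samples of one signal
assert len({measure(v) for v in readings}) == 1
print(measure(3.30))   # 1
```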
 