
Explain consciousness to the layman.

Status
Not open for further replies.
Name one thing in reality that cannot ever be sufficiently simulated (or emulated, or whatever) in a computer or robot to provide the inputs necessary for consciousness to emerge.

"Consciousness to emerge". That doesn't sound particularly rigorous to me.

If you can't name anything, then YOU are the one invoking magic. Not me.


Could you perhaps precisely describe what is scientifically necessary for "consciousness to emerge"? Just what "inputs" are needed?

A bunch of vague waffle, a demand to disprove it and the obligatory reference to magic.

Making the argument that a "simulation is not the real thing" is irrelevant. An artificial heart is not the real thing either, and yet it can pump blood as effectively as (or even more efficiently than) the real thing.

We know this because we fully understand the functionality of the heart. We can precisely define what it does, and what we want it to do. Apart from anything else, we can replace the heart (albeit temporarily) with a simple pumping system and see that it works.

In the case of consciousness, the claim appears to be that because the term is so vague, and the functionality so much more complex, it's possible to make equally vague assertions and demand that someone disprove them.

Of course they would say it is inconclusive. What they are NOT doing is offering principles that would make it impossible.

Maybe you should study what science already knows about consciousness. We don't know everything, yet. But, what we do know seems contrary to the notion that consciousness can't ever be simulated or emulated in a machine.

"Machine" is good. "Machine" encompasses everything.

Another frequent feature of this discussion is to make very precise and specific claims - relating to algorithms, Church-Turing and computation - and then rephrase them in a totally open way. Because if someone denies that a machine might duplicate the functionality of the brain - well, that's just a claim of mysticism and magic and god.

Obviously if consciousness is associated with particular physical processes, then a machine that duplicated those processes would produce the same effect. The computation claim is that no particular physical processes are essential to consciousness, as opposed to every other process going on in the human body.

I suppose it depends on what points one is trying to make.
 
I am just wondering something. Sorry to interrupt. But is there something about your belief system that leads you to think that pumping blood is all there is to an effective, efficient heart?
There is no evidence that the heart is a "second brain" or has "cellular memory" as the story implies. That would hearken back to concepts like essentialism, which modern biological discoveries tend to contradict.

BUT, there is no doubt a heart transplant can make an impact on behavior: it is, after all, a major physiological change to go through. Most of the examples your link cited can probably be explained by more down-to-earth factors, but getting into them would be a derail. If you start a new thread on this, I am sure the discussion that follows would bring them out.


However, for the sake of argument, let us assume the paper is correct: that the heart does more for our mind than merely pumping blood. I would then say: Such functions can ALSO be simulated by a machine: Either circuitry added to the artificial heart... or by a separate machine stuffed somewhere else in the anatomy. It makes no difference to my fundamental argument.
 
As a database programmer, I don't get why this seems so hard to believe. Perhaps someone can tell me what I'm missing. If one takes up the task of actually trying to build a conscious robot, the way becomes quickly clear.

First you have to come up with a general definition of consciousness that you will be trying to program. To get started, this can simply be a list of features that are generally considered to be part of consciousness.

conscious - a. Having an awareness of one's environment and one's own existence, sensations, and thoughts. http://www.thefreedictionary.com/conscious

This seems quite sufficient to get started. We know the robot will need to be:
1. aware of its environment
2. aware of its own existence
3. aware of its own sensations
4. aware of its own thoughts

With today's technology nothing on that list is particularly hard to do. I described my technique in an earlier post.

http://www.internationalskeptics.co...4694&highlight=self-consciousness#post8034694

What am I missing?
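Purely as illustration - here is a minimal sketch of that four-item checklist as code, with entirely hypothetical names and faked sensor values. This is not the technique from the linked post, and whether this kind of bookkeeping amounts to "awareness" is exactly what the thread disputes:

```python
class RobotMind:
    """Toy stub of the four listed features. All sensor values are faked."""

    def __init__(self):
        self.alive = True      # 2. a flag the robot can query about its own existence
        self.thoughts = []     # 4. a log of its own "thoughts", open to inspection

    def sense_environment(self):
        # 1. awareness of environment: stubbed external sensor readings
        return {"light": 0.7, "sound": 0.2}

    def sense_self(self):
        # 3. awareness of its own sensations: stubbed internal readings
        return {"battery": 0.9, "damage": False}

    def step(self):
        thought = {"world": self.sense_environment(),
                   "self": self.sense_self(),
                   "exists": self.alive}
        self.thoughts.append(thought)  # it can now "reflect on" this thought
        return thought

mind = RobotMind()
mind.step()
print(len(mind.thoughts))  # 1
```

The sceptical replies below turn on whether logging state like this is awareness of it, or merely a representation of awareness.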
 
So in the last dozen or so pages, has there been anything but a circular argument from incredulity? That is, "consciousness can't be simulated because I figure there'll be something that's vital to consciousness which can't be simulated?"

I don't know of anyone except traditional pro-soul believers who claim that there can never be a conscious machine.

In other threads, Westprog for instance has objected to the assertion that it is "in theory possible to build a conscious machine" because we have no such theory yet -- we may discover that there is some impediment after all -- and so we must restrict ourselves to saying that there's no impediment currently known.

Obviously, one physical object manages to produce consciousness (the brain) so it's a bit of a stretch to contend that no other object could ever be designed and built which also produces consciousness, absent some clear physical reason, which would have to rest upon a clear understanding of how consciousness is done, which we don't have.

So again, there are no currently-known impediments to building a conscious machine, but no currently-known way of doing it either.

Where we run into problems is with assertions (which have been made in this forum) that consciousness can be "programmed" -- an assertion which would also require a clear and complete explanation of the phenomenon, which we do not have (to say nothing of the problem of identifying a true analogue of "programming" in the brain). Equally problematic is the assertion that a computer simulation of the brain (NOT a scale model or functional model, which is different) would somehow cause a real instance of conscious awareness in the physical world. No such thing happens in any other sort of digital simulation of any other system (computers modeling the weather are never subject to any actual windy conditions, no matter how hard the simulated gale might blow), and the scientists actually working on such a project firmly deny that it will result in a conscious machine.

Some folks making these assertions have argued for what seems like an informational frame of reference, a special "world of the simulation" which has some sort of actual claim to objective existence, but these assertions are, of course, hopelessly incoherent because the "world of the simulation" exists only in the mind of the perceiver of the output of the simulation and nowhere in objective physical reality. In short, it's an imaginary world.

Functional models are another matter, of course. A model car can still run over your foot.

As someone noted above, ensuring we're all using the same vocabulary for computer simulations v. working models is key to keeping the conversation from blowing up into merely apparent disagreements that aren't substantive.

I've been wondering if perhaps we could make things clearer by referring to "representations" (which would include digital simulations run on computers) and "instances" (which would include any hypothetical conscious machine).

But in any case, the request of the thread title cannot be met, because there are no non-laymen who understand it yet either.
 
I would then say: Such functions can ALSO be simulated by a machine: Either circuitry added to the artificial heart... or by a separate machine stuffed somewhere else in the anatomy.


Thanks. One last question. Is this the kind of thing that Popper would call "promissory materialism"?
 
As a database programmer, I don't get why this seems so hard to believe. Perhaps someone can tell me what I'm missing. If one takes up the task of actually trying to build a conscious robot, the way becomes quickly clear.

First you have to come up with a general definition of consciousness that you will be trying to program. To get started, this can simply be a list of features that are generally considered to be part of consciousness.

conscious - a. Having an awareness of one's environment and one's own existence, sensations, and thoughts. http://www.thefreedictionary.com/conscious

This seems quite sufficient to get started. We know the robot will need to be:
1. aware of its environment
2. aware of its own existence
3. aware of its own sensations
4. aware of its own thoughts

With today's technology nothing on that list is particularly hard to do. I described my technique in an earlier post.

http://www.internationalskeptics.co...4694&highlight=self-consciousness#post8034694

What am I missing?

Considering that no one knows how the brain does any of these things, how are you going to replicate them?
 
Obviously if consciousness is associated with particular physical processes, then a machine that duplicated those processes would produce the same effect. The computation claim is that no particular physical processes are essential to consciousness, as opposed to every other process going on in the human body.

Bingo. Many self-described computationalists on this forum are guilty of that error, even if they're not in the overtly "it just happens" faction.

If you use Wolfram's definition of computation, then every organ in your body is performing computations.

I've never been able to tease a definition of the term from any computationalists here which actually ends up putting the brain in a different category from every other bit of physical matter in the universe.
 
I have glimpsed the articles, but not yet read them in their entirety.

Does the first one offer any examples of principles that would render the discovery of how consciousness works impossible? Or that it would be impossible for a computer to do it, eventually?

The neuroscientists quoted don't claim that it is impossible that a computer will emulate the brain. They also don't say that it is possible. They say that the functionality of the brain is not well enough understood to make a definite assertion.

This is precisely what I am saying. No more, no less. If what I am saying equates to magic, then so does what they are saying.

Does the second one actually list anything that can NOT be computed by a Turing Machine, but can yet be computed by another machine?

If the answers to any of these are "yes", I would like them to be pointed out.


I strongly recommend that anyone interested in this subject - and in particular, in the connection between the Church-Turing Thesis and artificial intelligence - read the article, or at least the section on misunderstandings. It shows precisely how much of the writing on this subject is not soundly based, and that the people invoking Church-Turing don't necessarily understand exactly what it is saying.

It does not apply to the general case of whether a brain can be replaced by some kind of unspecified artificial system, or even some form of digital system. It does apply to the specific claim that it is possible to run a program on a computer system that will have the exact experience of a human being.

Stanford said:
The Church-Turing thesis does not entail that the brain (or the mind, or consciousness) can be modelled by a Turing machine program, not even in conjunction with the belief that the brain (or mind, etc.) is scientifically explicable, or exhibits a systematic pattern of responses to the environment, or is ‘rule-governed’ (etc.).

Stanford said:
any device or organ whose mathematical description involves functions that are not effectively calculable cannot be so simulated. As Turing showed, there are uncountably many such functions.

That seems fairly clear to me - though when I've previously posted this reference Pixy Misa insisted that it meant the exact opposite.
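On the Turing point: the stock example of a function no Turing machine can compute is the halting function, and the reason there are uncountably many uncomputable functions is that there are only countably many Turing machines. The diagonal argument can be sketched in code, using a deliberately wrong stand-in oracle (all names here are hypothetical; a correct `halts` cannot exist, which is the point):

```python
def make_paradox(halts):
    """Given any claimed halting-decider halts(f) -> bool,
    construct a program that the decider must misjudge."""
    def paradox():
        if halts(paradox):
            while True:      # oracle said "halts", so loop forever
                pass
        return "halted"      # oracle said "loops", so halt immediately
    return paradox

# A stand-in oracle that answers "never halts" for everything:
says_loops = lambda f: False
p = make_paradox(says_loops)
print(p())  # "halted" -- the oracle predicted it would loop, and was wrong
# An oracle that answers "always halts" fails symmetrically: its paradox
# program would loop forever, contradicting the prediction.
```

This only shows what "not effectively calculable" means; it says nothing by itself about whether the brain relies on any such function.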
 
Bingo. Many self-described computationalists on this forum are guilty of that error, even if they're not in the overtly "it just happens" faction.

If you use Wolfram's definition of computation, then every organ in your body is performing computations.

I've never been able to tease a definition of the term from any computationalists here which actually ends up putting the brain in a different category from every other bit of physical matter in the universe.

I've regularly asked for such a physical definition, and the response has usually been along the lines of "the computer is performing physical processes".

"Information processing" is another term that's left as undefined as possible.
 
There is no evidence that the heart is a "second brain" or has "cellular memory" as the story implies. That would hearken back to concepts like essentialism, which modern biological discoveries tend to contradict.

BUT, there is no doubt a heart transplant can make an impact on behavior: it is, after all, a major physiological change to go through. Most of the examples your link cited can probably be explained by more down-to-earth factors, but getting into them would be a derail. If you start a new thread on this, I am sure the discussion that follows would bring them out.


However, for the sake of argument, let us assume the paper is correct: that the heart does more for our mind than merely pumping blood. I would then say: Such functions can ALSO be simulated by a machine: Either circuitry added to the artificial heart... or by a separate machine stuffed somewhere else in the anatomy. It makes no difference to my fundamental argument.

It should be pointed out that essential functionality is something which we choose. If we want to replicate all the functionality of the heart, then we need an exact duplicate of the heart. If we decide that all we want is a pump, then we can replicate that and leave other considerations to one side.

It's an error to suppose that there is an objective definition of essential functionality.
 
I don't know of anyone except traditional pro-soul believers who claim that there can never be a conscious machine.

In other threads, Westprog for instance has objected to the assertion that it is "in theory possible to build a conscious machine" because we have no such theory yet -- we may discover that there is some impediment after all -- and so we must restrict ourselves to saying that there's no impediment currently known.

Obviously, one physical object manages to produce consciousness (the brain) so it's a bit of a stretch to contend that no other object could ever be designed and built which also produces consciousness, absent some clear physical reason, which would have to rest upon a clear understanding of how consciousness is done, which we don't have.

So again, there are no currently-known impediments to building a conscious machine, but no currently-known way of doing it either.

Where we run into problems is with assertions (which have been made in this forum) that consciousness can be "programmed" -- an assertion which would also require a clear and complete explanation of the phenomenon, which we do not have (to say nothing of the problem of identifying a true analogue of "programming" in the brain). Equally problematic is the assertion that a computer simulation of the brain (NOT a scale model or functional model, which is different) would somehow cause a real instance of conscious awareness in the physical world. No such thing happens in any other sort of digital simulation of any other system (computers modeling the weather are never subject to any actual windy conditions, no matter how hard the simulated gale might blow), and the scientists actually working on such a project firmly deny that it will result in a conscious machine.

Some folks making these assertions have argued for what seems like an informational frame of reference, a special "world of the simulation" which has some sort of actual claim to objective existence, but these assertions are, of course, hopelessly incoherent because the "world of the simulation" exists only in the mind of the perceiver of the output of the simulation and nowhere in objective physical reality. In short, it's an imaginary world.

Functional models are another matter, of course. A model car can still run over your foot.

As someone noted above, ensuring we're all using the same vocabulary for computer simulations v. working models is key to keeping the conversation from blowing up into merely apparent disagreements that aren't substantive.

I've been wondering if perhaps we could make things clearer by referring to "representations" (which would include digital simulations run on computers) and "instances" (which would include any hypothetical conscious machine).

But in any case, the request of the thread title cannot be met, because there are no non-laymen who understand it yet either.

You make some good points. I have been criticized because of the pro-soul position, and because people think that if we make a conscious machine, that would prove people are just dust - no spirit. I disagree that this would be the automatic conclusion. The essential difference between real life and artificial life - the one defining characteristic that cannot be programmed - is the experience of pain. That is what separates real life (and spirit, if you will) from artificial lifeforms such as conscious robots. Everything else I can program but that: self-consciousness, emotions, will, thinking, creativity and perception can all be given to a conscious robot.

Now your point about mere simulations is a good one. That was the point of Searle's Chinese Room: it showed that the syntactic symbol-shuffling that constitutes "thinking" in weak AI is just a simulation, and that the machine is not actually understanding anything at all. He used a simulation of the digestive system as an analogy: it can never actually digest a pizza. So he does not consider this to be authentic artificial consciousness.

However, he told me personally that if a machine could semantically understand propositions and reason with them, then he would consider that machine to be truly conscious or strong AI. And that's what I figured out how to do. With this component the rest falls into place as described in this post where I describe my technique.
http://www.internationalskeptics.com/forums/showthread.p...ss#post8034694
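For comparison only - the simplest kind of mechanical "reasoning with propositions" is a forward-chainer that applies modus ponens to exhaustion. This toy sketch (hypothetical code, not the system described above) is precisely the sort of thing Searle would call syntax without semantic understanding:

```python
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"it_rains"}, "ground_wet"),
    ({"ground_wet", "freezing"}, "ground_icy"),
]
print(sorted(forward_chain({"it_rains", "freezing"}, rules)))
# ['freezing', 'ground_icy', 'ground_wet', 'it_rains']
```

Whether any elaboration of this counts as the semantic understanding Searle asks for is the open question.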

I've just got this completed and have not shown it to anybody yet. I'm working on a deal to exhibit it in a demonstration in Niagara Falls. My sales pitch is:

"Tesla harnessed electricity in Niagara Falls and I have made it think."
 
Some folks making these assertions have argued for what seems like an informational frame of reference, a special "world of the simulation" which has some sort of actual claim to objective existence, but these assertions are, of course, hopelessly incoherent because the "world of the simulation" exists only in the mind of the perceiver of the output of the simulation and nowhere in objective physical reality. In short, it's an imaginary world.

One of the extraordinary things about this discussion is the insistence that the worlds of computer simulations are just as real as the world we live in. I assumed at first that this was hyperbole or just a way to consider them, but there seem to be people who believe this to be literally true, and that this is in fact a materialist viewpoint.
 
The essential difference between real life and artificial life - the one defining characteristic that cannot be programmed - is the experience of pain. That is what separates real life (and spirit, if you will) from artificial lifeforms such as conscious robots.

This is not accurate.

There are people who are alive and conscious who cannot feel pain. Their lives are difficult and they tend to die early.

If we ever build a conscious machine, I presume it will either feel pain or not, depending on how it's built.
 
This is not accurate.

There are people who are alive and conscious who cannot feel pain. Their lives are difficult and they tend to die early.

If we ever build a conscious machine, I presume it will either feel pain or not, depending on how it's built.

There are people who can't feel pain, in the physical sense, but they are still capable of suffering. In fact, AFAIAA, they lead quite miserable lives. Inability to feel pain is a side effect of some diseases, such as leprosy.

Whether or not the ability to suffer is necessary for consciousness is another matter. I can certainly imagine a conscious being who couldn't suffer.
 
One-word answers without reasons are worse than nonsense.


Nonsense.

Your post was tainted by your endorsive use of 'soul' and 'spirit'.

I don't click on links offered by people who I don't both know and respect.

Your interest in Searle does nothing to recommend your opinions.

I could comment further, but why bother?

In summary,

Nonsense.
 
This is not accurate.

There are people who are alive and conscious who cannot feel pain. Their lives are difficult and they tend to die early.

If we ever build a conscious machine, I presume it will either feel pain or not, depending on how it's built.

I have to disagree. There are people who are unconscious too, but they are still people. Pain is still a hallmark attribute. And a robot will never be able to experience actual pain sensation, because it is not real life. It may get a simulation of pain, but not pain itself, in my opinion. But that is secondary to the issue, and an attempt to ease the minds of spirit believers who get very upset about conscious machines because they think that would mean we are merely machines.
 
