
Explain consciousness to the layman.

...I would expect that any good philosopher would have considered that God might well be a highly evolved AI, why not? It makes a lot more sense than some of the other notions.
Philosophers probably have speculated on the nature of God.

You ask "why not?"; one very good reason comes to mind - it makes little sense to speculate on the nature of something you have no reason to suppose exists at all. Even less to speculate that this unproven entity might be the manufactured product of yet other unproven entities... Ockham's Razor and all that.
 
As I see it the distinction is that a simulation is a projection of a scenario (tornado) onto/through a medium like a monitor screen. The simulation is coded in such a way that what is seen on the screen by an observer resembles a tornado.

So if we analyze what is going on here we will see that only the observer is aware of the resemblance of a tornado (in his/her mind) and this comes from a sequence of patterns generated on an illuminated screen.

I agree.

However this is irrelevant, because if we analyze what is going on anywhere, we see that only an observer is aware of the resemblance of a real tornado to a real tornado as well.

Simulated tornado, real tornado, to be called "tornadoes" they both require an observer capable of recognizing that they have a resemblance to the notion of "tornado" that the observer understands.
 
I just got a private message from someone who wondered why I'd left this thread, but couldn't answer him because his incoming message box is not set up. Here's my response to him:

Thanks for your interest. I drifted off the thread because it just seemed like too much back and forth on the same issues -- is there a magic bean of consciousness or not? Is a "Turing Machine" a full-fledged computer or not? It just seemed too much like a "yes it is, no it isn't" infinite circle and no progress was being made. Yesterday I had a new thought to contribute to the thread, and was contemplating rejoining. Maybe I will. All my best wishes.
 
A mechanical fish has been developed that other fish find so compelling that it gets promoted to the top of their hierarchy.

A pole dancing robot made the news. I don't doubt that a robot pole dancer can be made that would arouse a man, or even arouse men more than a live woman could.

What would we have to do to program our pole dancing robot to, itself, feel sexually aroused?

 
It will be programmed to have consciousness (well not programmed but built to have consciousness - I doubt programming alone would suffice) and to be sophisticated enough with the English language to answer fairly sophisticated abstract questions like "are you conscious?" truthfully. How is that "programmed to come to that conclusion"? If it isn't conscious it is programmed to say it is not. At that point it would be back to the drawing board for the consciousness building engineers.

In fact it could be that the standard robot chassis for truthfully answering sophisticated questions in English is provided by some third party a la Asimov. The scientist merely has to add the attempted consciousness module and pop the question. Easy. Easy assuming you can build a talking robot already.
Anyone suspecting the experiment is rigged or flawed need only repeat it for themselves as with all other experiments in science.


This appears to be the "If we solved the problem of consciousness then wouldn't we have solved the problem of consciousness" hypothetical. It assumes so much that it doesn't leave us much that we can conclude. Of course if we could produce a "consciousness module" that would be very interesting. The question is how to do that.
 
Would the fact that Data can have dreams and nightmares be evidence of his consciousness?

It would be evidence to Data. It might be possible for Data to have a full range of conscious experiences, but we couldn't tell whether he really was conscious, or if he was just pretending like a Cylon.
 
Consciousness is easy to define and has been defined. It is true there is some confusion because people often use the word in slightly different ways. There's nothing magic or complicated about it at all. It's a thing like anything else. In any case, "conscious" is a word, so you can always define it; it's more a question of whether the definition is useful or not.

That paragraph might benefit from the actual easy definition of consciousness being included.
 
Tell them you don't agree with their definition of consciousness - although they're probably well aware of that.

Continuing to argue as if they're using the same definitions as you is a pointless exercise, because it's arguing a straw man.

It's a bit more than a matter of which definition is chosen. To use a particular definition of consciousness is to imply that there is no significant qualitative difference between the consciousness of a human being and the consciousness of a computer program. Otherwise, why use the word "consciousness"?

The computationalist claim is precisely that there is nothing in the human experience that wouldn't be completely fulfilled by the appropriate computer simulation. These are disagreements of substance, not of definition.
 
I don't think it is - my point was aimed at those who think that an imitation, simulation, or mimic of consciousness, isn't or can't be conscious even when you can't tell the difference when interacting with it; i.e. suggesting that the Turing Test isn't sufficient because it could be passed by a consciousness 'mimic'. It's moving the goalposts by redefinition again, another 'consciousness-of-the-gaps'.

As has been pointed out, the Turing test isn't even supposed to be a test of consciousness. However, as a test for anything, it's absurdly subjective. What if Dr Jones does the test one day and finds the machine is conscious, and the next day Professor Smith does it and finds that it isn't? Is the machine conscious one day, and not the next, purely based on the impression some scientist gets from it?

Science is about objective, repeatable tests. It's not about faith-based acceptance.
 
Thing is, people mean different things by 'internal architecture'. There is physical architecture and there is software architecture.

If I understand it correctly, one lot are saying that the relevant physical architecture of the brain must be physically modeled in the artificial version, and the other lot are saying that the relevant physical architecture can be modeled in software to the same effect.

No doubt someone will correct me if I have this wrong.

There are actually at least three positions - that consciousness is a matter of "software architecture" - that it is a matter of "physical architecture" - or that we don't actually know which it is. The third is my position. I suppose that there might also be people who claim that it is a matter of "spiritual architecture".
 
The computationalist claim is precisely that there is nothing in the human experience that wouldn't be completely fulfilled by the appropriate computer simulation. These are disagreements of substance, not of definition.
So, is your operational definition of conscious then "everything in the human experience exactly as it is in a human"?
 
Everything except why everyone was "lying" to him. He'd have to come up with a theory to explain why everyone he talked to was a liar on the topic of consciousness but not on any other topic. Or he could choose to believe the simpler explanation per Occam's Razor.

Similarly I could choose to not believe that Australia exists.

A refusal to accept unsubstantiated testimony is fairly common. Scientists don't believe in ghosts, out of body experiences, levitation, or any number of things supported by witnesses. Even when they keep an open mind, they prefer to assume that rather than some entirely unexplained phenomenon, the workings of the human brain are not 100% reliable, and that people will tell them things which are not in fact true.
 
OK. Good to see you dropped the tornado nonsense. Did you go and read those posts again properly this time?

OK, piggy is saying it takes matter and energy and physics to make a consciousness. But then he also seems to be saying that what goes on in a simulation isn't matter and energy and physics, and that it is therefore impossible to create consciousness via a simulation.

Why isn't what goes on in a simulation matter and energy and physics?

What is simulated physics?

We all seem to be saying that living things are biological machines programmed by their environments and that we have at least one example of such a biological machine which is conscious (Humans).

Why would programming not work to make a non-biological machine conscious?

What if we made a simple machine that was programmed to build replicas of itself and sent it off into space, could its descendants eventually evolve intelligence and consciousness, given millions or billions of years?

Yes, what goes on in a computer is "matter and energy and physics". MEP, for short.

What goes on in your oven is MEP. You use MEP to make a cake. Does that mean that any combination of MEP will produce cakes? Obviously not. In fact, for any physical phenomenon or object, we find that it is created by a particular type of MEP. It's also inherent in the definition that other types of MEP won't create that phenomenon/object.

The computationalist claim is that any kind of MEP can produce consciousness, provided that similar patterns arise.

Nobody has claimed that a simulation of a tornado is not a physical process. They've just said that it isn't a tornado. It becomes a tornado, perhaps, if you were to plug a really big fan into it. But then it would be a tornado not because it was a good simulation of a tornado; it would be a tornado because it was moving a lot of air around quickly.
 
I just got a private message from someone who wondered why I'd left this thread, but couldn't answer him because his incoming message box is not set up. Here's my response to him:

Thanks for your interest. I drifted off the thread because it just seemed like too much back and forth on the same issues -- is there a magic bean of consciousness or not? Is a "Turing Machine" a full-fledged computer or not? It just seemed too much like a "yes it is, no it isn't" infinite circle and no progress was being made. Yesterday I had a new thought to contribute to the thread, and was contemplating rejoining. Maybe I will. All my best wishes.

I think you can safely rejoin at any point, from now into an indefinite future.
 
To use a particular definition of consciousness is to imply that there is no significant qualitative difference between the consciousness of a human being and the consciousness of a computer program.
I'm not sure that's the case; more precisely, it depends on the definition. I don't see that a direct comparison with human intelligence is necessarily implied, intended, or appropriate.

Otherwise, why use the word "consciousness"?
What other word is more appropriate for the nature of what is being described and labelled?

It might be helpful to consider animal consciousness. Most of us would agree that other animals have various degrees of consciousness. Would you be happy to say there is no qualitative difference between human consciousness and that of the most minimally conscious animal you can think of? Are levels of consciousness purely quantitative?

It seems to me that there are probably many qualitatively different forms of consciousness that share certain common features (e.g. self-awareness).

The computationalist claim is precisely that there is nothing in the human experience that wouldn't be completely fulfilled by the appropriate computer simulation. These are disagreements of substance, not of definition.
That's not my understanding of the majority of computationalist claims (of which there is a variety). The basic claim is that consciousness is a computational process, therefore it is theoretically possible to produce an artificial consciousness by computational methods. Some would extend this to producing a human-like consciousness by computational modeling inspired by a detailed knowledge of the functioning of the human brain.

However, the full gamut of human experience involves the physical and mental development and interaction of a human body and brain over many years in a variety of environmental contexts; I don't recall any computationalists claiming that all this could be 'completely fulfilled by the appropriate computer simulation', except in the most speculative discussion. In my experience, most discussions in this area focus on the probable differences in human and artificial conscious experience, as is the case in most science fiction treatments (Asimov's robots, HAL in 2001, Blade Runner, Marvin the paranoid android, Kryten in Red Dwarf, et al.).
 
So, is your operational definition of conscious then "everything in the human experience exactly as it is in a human"?

I would say that if something can have subjective experiences, then it's conscious, in some sense.
 