Explain consciousness to the layman.

Thus, it is widely taken on faith.

I no longer debate Church-Turing with Pixy. The reference he gave to the thesis explicitly refuted the interpretation he gave, but this made absolutely no difference to his faith in it.

Church-Turing proves nothing about the operation of the brain unless you assume that it is a particular kind of object to start with.
 
Yikes! An egorama!
Exactly! Now stop projecting.
I wasn't using ad hom to dismiss your arguments. It was a hypothesis for why your arguments had zero traction.
I hypothesize that you're in denial.
You are invited to make better arguments.
Shifting the burden.

You were the one that introduced the initial argument. I'm not making any arguments; I pointed out an error in your approach. In fact, I'm not even taking the position you think I am.

But that should be irrelevant. Onus probandi.
Argumentum ex silentio? Sometimes, silence makes the loudest sound.
Thought-terminating cliche.

Fact is, your original thought experiment did not work, and I explained why. At a minimum you should accept this; if you had any inkling of vision, you would also accept the implication that your thought experiment failed and proceed with caution.

Me? I'm just the messenger. I'm irrelevant. Stop trying to poke holes in me. Pay attention to the message; I dare you to even consider that it might be wrong, and find out why.

After all, if you are, as you say, a materialist, then shouldn't explanations of why experiments fail to show problems with materialism look like the objection I raised to your original thought experiment?

I've decided to stop wasting time on this for now.
 
But Pixy, if we give up magic...

Just as predicted.

It's not so much Church-Turing as the Church of Turing. Accept the true faith or be cast out. Actually demonstrating a scientific principle can be left aside. It's sufficient to denounce the heretics.
 
Getting back to the OP, do you think it's possible to explain to a layman what consciousness is if the layman has little understanding of computational processes? I suspect not.

I had a debate with a techie who thought IBM's Watson was not a big deal, that it only looked up words. My interest in AI started when, as a little kid, I realized a museum's tic-tac-toe-playing machine was more than buttons wired to lights.

Speaking of Watson, I bet many laymen think it's conscious. Heck, I bet a lot of people think Honda's ASIMO is, too.

As robots and AI become better and better at giving the illusion of a conscious agent with feelings, how will we explain to laymen where the threshold is between a great illusion and the real deal? This is an uphill climb.
 
Consciousness is computationally well defined?

You start by assuming that consciousness is a computational process, and using a series of deductive steps, you deduce that consciousness is a computational process. Be sure to insist that anyone who notices that the entire edifice is built on circular reasoning "believes in magic".

I said early on that the focus would shift, as it always does, from arguing about consciousness to characterising the motives of the people who fail to fall into line.
 
That's precisely the point. A Turing machine can simulate anything that happens in a biological system, but it doesn't replicate it.
If the simulation produces all of the relevant qualities of the thing it is simulating, whether in a more abstract manner or not, then the simulation could qualify as a replication of that system.

If an evolutionary system works out an optimal way for spiders to construct their webs, you might be prone to think it was "only a simulation".

However, if that computer was controlling robotic spiders, and the webbing had sensors to detect how effective it was for catching flies, I do not think it would be inaccurate to say the machine was replicating the evolutionary process.

ETA: But, you don't need a physical system to go from simulation to replication. I was using the physical system to demonstrate that the line between the two is blurred, even in the purely software case.
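
As a rough sketch of what I mean (Python, with the web parameters and fitness model invented purely for illustration): the evolutionary loop below is identical whether its score comes from a software model of fly capture or from sensors on webbing spun by robotic spiders. Only the fitness function is swapped.

    import random

    def mutate(params):
        """Randomly perturb one web parameter (e.g. spiral spacing)."""
        new = dict(params)
        key = random.choice(list(new))
        new[key] *= random.uniform(0.9, 1.1)
        return new

    def evolve(population, fitness, generations=50):
        """Generic evolutionary loop: keep the better half, mutate it.
        The loop does not know whether 'fitness' is simulated or physical."""
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: len(population) // 2]
            population = parents + [mutate(p) for p in parents]
        return max(population, key=fitness)

    def simulated_fitness(params):
        """Toy software model: reward webs near an arbitrary 'ideal' geometry."""
        return -abs(params["spacing"] - 1.5) - abs(params["radius"] - 10.0)

    def physical_fitness(params):
        """Hypothetical: have a robotic spider spin this web and count flies
        caught via sensors in the webbing (requires actual hardware)."""
        raise NotImplementedError

    seed = [{"spacing": random.uniform(0.5, 3.0),
             "radius": random.uniform(5.0, 20.0)} for _ in range(20)]
    print(evolve(seed, simulated_fitness))  # swap in physical_fitness to "replicate"

The loop itself doesn't change at all; whether it is "only a simulation" or a replication comes down to what the fitness function is connected to.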
 
I certainly don't know that qualia won't exist in (computer) simulations of biological systems. I know that no other biological systems operate in such simulations, and that there is no evidence that qualia exist in computer simulations.
But there is also no evidence that qualia exist in other people except that they say so.

It is also possible that certain accidental properties of the computers running the simulations coincide with certain biological processes, but this is nothing to do with the simulation. One should not confuse a simulation with the thing itself.
That's correct, though there is no hard dividing line outside the system boundary.
In principle, if you have the proper interface and if your model is accurate enough, you can make any simulation behave the same as the thing you're simulating.
For a simple instance, I could simulate some electronic circuit on a PC, insert the PC where the circuit was, and have everything behave exactly as it did. Or I could simulate the conversion of sound to nerve impulses in the cochlea and attach the output to your auditory nerve, making you hear through a simulated cochlea.
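
A rough sketch of the interface point (Python; the filter model, class names, and numbers are invented for illustration): the downstream code cannot tell whether it is driving the physical circuit or its simulation, because both sit behind the same interface.

    class SimulatedLowPassFilter:
        """Software model of a first-order RC low-pass filter."""
        def __init__(self, r_ohms, c_farads, v_out=0.0):
            self.tau = r_ohms * c_farads
            self.v_out = v_out

        def output(self, v_in, dt):
            # Discretised dV/dt = (V_in - V_out) / tau
            self.v_out += (v_in - self.v_out) * dt / self.tau
            return self.v_out

    class RealLowPassFilter:
        """Stand-in for the physical circuit: in practice this would read an
        ADC wired to the real RC network (hypothetical hardware access)."""
        def output(self, v_in, dt):
            raise NotImplementedError

    def drive(component, signal, dt=1e-4):
        """Downstream code: it neither knows nor cares whether 'component'
        is the physical circuit or its simulation."""
        return [component.output(v, dt) for v in signal]

    step = [0.0] * 10 + [5.0] * 90
    print(drive(SimulatedLowPassFilter(r_ohms=1000, c_farads=1e-6), step)[-1])

If the model is accurate enough, you can swap the real circuit (or a real cochlea) in behind the same interface and nothing downstream changes.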
 
Getting back to the OP, do you think it's possible to explain to a layman what consciousness is if the layman has little understanding of computational processes? I suspect not.

It might be a good idea to try, though. If you start with a particular analogy for a system, it can force you into thinking that there's a literal correspondence.


I had a debate with a techie who thought IBM's Watson was not a big deal, that it only looked up words. My interest in AI started when, as a little kid, I realized a museum's tic-tac-toe-playing machine was more than buttons wired to lights.

Speaking of Watson, I bet many laymen think it's conscious. Heck, I bet a lot of people think Honda's ASIMO is, too.

As robots and AI become better and better at giving the illusion of a conscious agent with feelings, how will we explain to laymen where the threshold is between a great illusion and the real deal? This is an uphill climb.

There are people (mostly living alone) who think that the characters on their TV are conscious. It's quite possible that devices will be produced that will give the illusion sufficiently well that only people who fully understand their detailed operation will be able to doubt their conscious status.
 
Strictly speaking, no, it cannot.

But, hypothetically, there can be an even more vague sense of proto-qualia that could be experienced without consciousness.

Wait, wait.

So, a computer that can see through a colour camera doesn't have qualia, but proto-qualia?

Assuming this is what you're saying, what is the distinction between qualia and proto-qualia? Is it their relation to consciousness? And, if it is, isn't that a bit circular?
 
If the simulation produces all of the relevant qualities of the thing it is simulating, whether in a more abstract manner or not, then the simulation could qualify as a replication of that system.

If an evolutionary system works out an optimal way for spiders to construct their webs, you might be prone to think it was "only a simulation".

However, if that computer was controlling robotic spiders,

Then it wouldn't be a Turing machine. (Or, to be more precise - a necessary element of its functionality would not be part of the Turing model).

and the webbing had sensors to detect how effective it was for catching flies, I do not think it would be inaccurate to say the machine was replicating the evolutionary process.

ETA: But, you don't need a physical system to go from simulation to replication. I was using the physical system to demonstrate that the line between the two is blurred, even in the purely software case.
 
Getting back to the OP, do you think it's possible to explain to a layman what consciousness is if the layman has little understanding of computational processes? I suspect not.

I had a debate with a techie who thought IBM's Watson was not a big deal, that it only looked up words. My interest in AI started when, as a little kid, I realized a museum's tic-tac-toe-playing machine was more than buttons wired to lights.

Speaking of Watson, I bet many laymen think it's conscious. Heck, I bet a lot of people think Honda's ASIMO is, too.

As robots and AI become better and better at giving the illusion of a conscious agent with feelings, how will we explain to laymen where the threshold is between a great illusion and the real deal? This is an uphill climb.

You are onto something here.

It seems defining consciousness as an illusion leaves the door open to make people believe that an illusion of consciousness is actually consciousness.

It is a neat trick and that's why we need to understand that the Turing Test applies both ways.
 
You have much computer science to learn, Luke. Turing Machine.

I am also skeptical that a Turing machine could produce the type of consciousness we have. We know a machine made of neurons can. We just aren't sure how.

That isn't my question. Everyone knows what a "Turing machine" is.

However, a "turing machine" is an idealized mechanism -- they do not exist in reality.

So I am asking westprog what exactly he/she is referring to, since "Turing machines" do not exist except as a fantasy.

What actually exist are "Turing *equivalent*" processes, which people educated in computer science -- like myself, an artificial intelligence programmer -- understand are simply processes that could be a "Turing machine" if they had infinite memory space and infinite time to perform their computations. Kind of like the human brain.

If you dispute that the human brain is Turing equivalent, then you have to explain to me why I could instruct you on how to be a Turing machine, have you sit down in front of a stack of paper, and have you perform any known algorithm that a Turing machine can perform (assuming you have enough paper, never grow old and die, and of course have a pencil sharpener).
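
To make that concrete, here is a rough sketch (Python; the "increment" transition table is just an invented example machine, not anything standard). Each entry reads "in this state, seeing this symbol: write, move, change state" -- exactly the kind of instruction sheet you could follow by hand with pencil and paper.

    def run_turing_machine(program, tape, state, head=0, blank="_", max_steps=10000):
        """program[(state, symbol)] = (write_symbol, move, next_state), move in {-1, +1}.
        Stop when the machine enters the 'halt' state (or after max_steps)."""
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            write, move, state = program[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

    # Example machine: add 1 to a binary number (head starts on the leftmost digit).
    increment = {
        ("seek_end", "0"): ("0", +1, "seek_end"),
        ("seek_end", "1"): ("1", +1, "seek_end"),
        ("seek_end", "_"): ("_", -1, "carry"),
        ("carry", "1"): ("0", -1, "carry"),   # 1 + 1 = 0, carry to the left
        ("carry", "0"): ("1", -1, "done"),    # absorb the carry
        ("carry", "_"): ("1", -1, "done"),    # number was all 1s
        ("done", "0"): ("0", -1, "done"),
        ("done", "1"): ("1", -1, "done"),
        ("done", "_"): ("_", +1, "halt"),
    }

    print(run_turing_machine(increment, "1011", "seek_end"))  # prints 1100

Follow the same table by hand, cell by cell, and you get the same answer; the only differences are speed and how much paper you have.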
 
Are you saying that magic exists ?

"Magic" would be insisting that running a computer simulation of a process can produce the same effects as the process itself. No, I don't believe in that kind of magic.
 
You are onto something here.

It seems defining consciousness as an illusion leaves the door open to make people believe that an illusion of consciousness is actually consciousness.

It is a neat trick and that's why we need to understand that the Turing Test applies both ways.

I'm not sure if I agree, since the "illusion of consciousness" is properly observed from the inside, not from the outside. It's too damn easy to look at a rock and give yourself the subjective experience (quale ;) ) that it's conscious.
 