
Explain consciousness to the layman.

piggy said:

what if we are all living in a simulation right now?

wouldn't that sort of defeat your entire argument if it were true?

if you don't think we are living in a simulation, do you have any evidence to support your conclusion?

What if Santa Claus gave you a ride on his sleigh? Wouldn't that sort of defeat the argument that there is no Santa Claus?

Now let's parse out this "living in a simulation" thing.

If you mean a Matrix scenario, in which my body is living in some physical world but my brain is being fooled, then it falls under the category of "not impossible, but entirely irrelevant".

If you mean, however, that we actually are something like Sims people, well, this is simply impossible because it would -- again -- mean that a representation somehow generates an actual objectively real world which is not the plain old physical reality where the computer running the simulation exists. (And apparently does so with no energy fingerprint at all.)

And if that's true, then it makes sense to ask if Piglet gets scared of the dark when I close my copy of Winnie the Pooh.
 
Straw man.

Equivocation fallacy.

The phrase "whatever the brain does" clearly does not mean "a specific function performed by the brain".

If it was intended to mean that, then it's a horribly poor choice of words.

If you want someone to bring you a Pepsi, do you ask them to bring you "whatever they've got to drink"?
 
It seems that the computational literalists think it's OK to accept their unsupported philosophical position as a starting point: Consciousness is the product of relationships among bits of information, the result of logic alone, and unlike all other phenomena in this world it requires no interaction of mass and energy (these are only required to "run the logic" which itself is responsible for the event).

But if we're going to talk about consciousness, our starting point is biology, which is to say chemistry, which is to say physics.

This is also the baseline against which all claims must be compared and judged.

The literal interpretation is certainly intriguing (I side with Pinker, who sees the computational mind as a tremendously useful metaphor) and downright revolutionary if true... but before accepting it, I'm going to need to hear a coherent explanation which does not simply ignore physical reality and assert that logic alone -- which is to say, relationships among abstractions -- can do anything at all in the world of matter and energy.
 
For what it's worth, I also side with the idea that the "computational mind" is a useful metaphor for a human mind.

However, I also think it's possible for a literal computer to develop a conscious mind, if developed the right way.

This is not a contradiction: They would be different forms of minds. But both would be minds nonetheless. The ultimate outcome would be the same, even if the proximate details are completely different.
 
What if Santa Claus gave you a ride on his sleigh? Wouldn't that sort of defeat the argument that there is no Santa Claus?

Now let's parse out this "living in a simulation" thing.

If you mean a Matrix scenario, in which my body is living in some physical world but my brain is being fooled, then it falls under the category of "not impossible, but entirely irrelevant".

If you mean, however, that we actually are something like Sims people, well, this is simply impossible because it would -- again -- mean that a representation somehow generates an actual objectively real world which is not the plain old physical reality where the computer running the simulation exists. (And apparently does so with no energy fingerprint at all.)

And if that's true, then it makes sense to ask if Piglet gets scared of the dark when I close my copy of Winnie the Pooh.

Or if moving pebbles in the desert is a thought.
Or if the numbers on the computer screen at the bank are my worth.
Or if the number in my passport is my identity.

All part of the same problem.
 
Then take another look at those two sentences you quoted there.

You cannot say, at the same time, that consciousness is a specific function of the brain (what it's doing under certain specific conditions and not under others) and that it is also "whatever the brain does".
You and I have been over this before. Consciousness != being conscious, but even granting you that equivocation, "awake" isn't exactly a specific function yet you're unable or unwilling to refine it further. That about cover it?
 
Straw man.

Equivocation fallacy.

Look, if you use a definition, then you should accept what it means. "Whatever the brain does" is a bad definition, because a lot of what the brain does is not directly concerned with consciousness. Piggy's definition - what the brain does when it's awake that it doesn't do when it's asleep - is a good definition - or at least it's better.

It seems that if you take people literally on this topic, then you're equivocating.
 
Why would I? The question doesn't even make sense, nor do I see how it might be relevant if it did.

I think the point is that if something is already not part of reality - a story, or a simulation, or a computer program - then it can be abstracted and referenced and encompassed as often as you like. Having a character in a story tell a story doesn't create a new world inside the old one - it remains part of our imagination. Things are either real or imaginary.
 
If you mean, however, that we actually are something like Sims people, well, this is simply impossible because it would -- again -- mean that a representation somehow generates an actual objectively real world which is not the plain old physical reality where the computer running the simulation exists. (And apparently does so with no energy fingerprint at all.)

I don't insist that it's necessarily impossible. I do think that it is entirely unproven that a computer simulation, no matter how complex, can result in the creation of a virtual world in which people think that they exist in the same sense that we exist.

However, the hypothetical "what if some example existed that entirely proved me right and you wrong?" question can always be posed for any argument.
 
You and I have been over this before. Consciousness != being conscious, but even granting you that equivocation, "awake" isn't exactly a specific function yet you're unable or unwilling to refine it further. That about cover it?

And yet again on this topic - the absence of fixed agreed definitions is not a good reason for accepting one hypothesis as proven. If "awake" and "asleep" aren't clearly defined, how are we supposed to think that we can design and create artificial minds? Shouldn't we understand what we are creating first?
 
And yet again on this topic - the absence of fixed agreed definitions is not a good reason for accepting one hypothesis as proven. If "awake" and "asleep" aren't clearly defined, how are we supposed to think that we can design and create artificial minds? Shouldn't we understand what we are creating first?
Did I say we should accept one hypothesis as proven?
Did I say awake and asleep weren't clearly defined?
Did I say we can (present tense) design and create artificial minds?
Did I say we could design and create artificial minds without understanding them first?

Actually, that last one I did say; that's the principal advantage of neural emulation over simulation. But still, you really need to hold back on the straw men there, guy.
 
So far, we've been given an obviously over-broad application of Church-Turing which cannot be found in Church-Turing.

To be fair, misunderstanding of Church-Turing is something of a speciality of philosophers talking about consciousness, including Dennett, Churchland and many others. Some of them like to improve on it to suit whatever view of the universe they prefer. Given that, it's not surprising that many of the posters here take their Church-Turing with a little splash of something else - just enough to justify the computational viewpoint.

While checking for references, I found this paper, which seems interesting. I haven't gone through it thoroughly, but I stumbled upon this quote:

Piccinini said:
Perhaps brains are simply not computing mechanisms but some other kinds of mechanisms. This view fits well with contemporary theoretical neuroscience, where much of the most rigorous and sophisticated work assigns no explanatory role to computation.

If the brain is not performing computation*, then clearly any hypothesis that insists that it is will tend to inhibit understanding. I would be worried about that, but I suspect that neuroscientists will go where the data takes them, and leave the philosophers and AI programmers to catch up.


*i.e. the functioning of the brain is not explained by considering it as computation. It is possible to describe the operation of the brain - or indeed, any physical system - as computation. That doesn't make it appropriate to do so.
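To make that footnote concrete, here is a toy sketch (purely my own illustration, not anything from Piccinini's paper; the function `fall_step` and the numbers are invented): a falling rock evolving under plain physics can trivially be *described* as a computation by recasting its evolution as a state-transition function, but the re-description adds nothing to the explanation.

```python
# Toy illustration only: plain physics for a falling rock, then the same
# physics re-described as "computation" by treating each time-step as a
# state-transition function applied repeatedly.

def fall_step(height_m, velocity_ms, dt=0.1, g=9.81):
    """Advance a falling rock by one time-step under constant gravity."""
    return height_m + velocity_ms * dt, velocity_ms - g * dt

state = (100.0, 0.0)             # (height in metres, velocity in m/s)
for _ in range(10):
    state = fall_step(*state)    # "the rock computes its next state"

# Calling the loop above a computation is always possible for a physical
# system; the footnote's point is that doing so explains nothing about the rock.
print(state)
```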
 
If you mean, however, that we actually are something like Sims people, well, this is simply impossible because it would -- again -- mean that a representation somehow generates an actual objectively real world which is not the plain old physical reality where the computer running the simulation exists. (And apparently does so with no energy fingerprint at all.)

Well, let's think about it a little.

First, do you claim to have non-subjective access to the external "objective" world of which you speak? If so, I would love to know how, because all the rest of us are stuck with using our senses and perceptions.

But assuming you do have such access (you don't, but whatever, I won't dwell on it), I don't think you can make such strong claims about the objective world.

We know that everything in the objective world is made of particles -- do you agree?

We are only able to detect particles via their interactions with other particles -- do you agree?

Furthermore the only exposure any particle has to any other particle is through these interactions -- do you agree?

If you agree with all of these premises, then you should agree that the only feature of particles that can be emulated is their interactions.

Now tell me where my logic breaks down:

1) we try to emulate the behavior of particle X by replacing it with a simulated particle inside a computer of infinite computing power and memory.

2) this doesn't work because now X has no way to interact with all of its neighbors in real space.

3) so we replace all of the neighbors of X with simulated counterparts, inside the same computer. Now X can interact with all its neighbors and they can interact with it.

4) however this just passes the buck to the neighbors of the neighbors of X. So we do the same thing.

5) eventually there are no particles in the universe, aside from the computer, and an entire simulated universe of particles inside the computer.

In this scenario, why haven't all the particles been emulated as well as simulated? If all that emulation requires is the ability to interact with neighboring particles, and that ability has been granted, why isn't it full emulation?
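A toy sketch of steps 1-5, in code (the classes `Particle` and `SimulatedParticle` and their `interact` method are my own inventions, nothing more than an illustration): if a particle is characterised by nothing but its interactions, then a stand-in honouring the same interaction interface is, within the toy at least, indistinguishable from the thing it replaced - which is the intuition the question above is pressing on.

```python
# Toy sketch of steps 1-5 above (invented for illustration; not real physics):
# a particle is modelled purely by how it interacts with its neighbours.

class Particle:
    """A 'real' particle: all the model knows about it is its interactions."""
    def __init__(self, name, charge):
        self.name = name
        self.charge = charge

    def interact(self, other):
        # Stand-in for a physical interaction: just the product of charges.
        return self.charge * other.charge

class SimulatedParticle(Particle):
    """A computed stand-in exposing exactly the same interaction interface."""
    pass

# Step 1: replace particle X with a simulated counterpart;
# steps 3-5: keep going until every particle has been replaced.
universe = [Particle("A", +1), Particle("B", -1), Particle("C", +1)]
for i, p in enumerate(universe):
    universe[i] = SimulatedParticle(p.name, p.charge)

# The interactions come out the same as before the replacement, which is the
# point of the question above: if interaction is all there is to emulate,
# what is left to distinguish simulation from emulation?
print(universe[0].interact(universe[1]))   # -1, exactly as with the originals
```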
 
Did I say we should accept one hypothesis as proven?
Did I say awake and asleep weren't clearly defined?
Did I say we can (present tense) design and create artificial minds?

You really need to hold back on the straw men there, guy.

I've made my position clear on numerous occasions. I've never gone beyond stating that the computational hypothesis is unproven. Piggy may go a bit further. You'll have to ask him.

If it's claimed that my position is wrong - and my position is that consciousness remains unexplained, and that computation is not the certain explanation - then obviously a contrary position involves accepting that computation is the certain (or at least highly probable) explanation.
 
Light enters the eye and is transmitted to the brain; it's called the sense of sight.

We touch a surface and the nerves in the fingers send a signal to the brain.


No great mysteries here unless you're going to call natural processes mysteries.

That sounds alright for groundhogs, caterpillars and elephants, etc., but they can't write poetry, paint masterpieces or write songs like 'Cat Scratch Fever'.
 
If it's claimed that my position is wrong - and my position is that consciousness remains unexplained, and that computation is not the certain explanation - then obviously a contrary position involves accepting that computation is the certain (or at least highly probable) explanation.

All evidence so far points to you being wrong. It does not prove you wrong, not yet, just indicates. Your phrasing implied that I would argue it constitutes proof, which I would not.
 
All evidence so far points to you being wrong. It does not prove you wrong, not yet, just indicates. Your phrasing implied that I would argue it constitutes proof, which I would not.

All evidence? All? Well, that constitutes grounds for disagreement.
 
All evidence? All? Well, that constitutes grounds for disagreement.

I don't suppose you will elaborate and share the evidence you have in mind that *does* point to the computation taking place in the brain not being the source of consciousness?

Of course not ....
 
If you mean, however, that we actually are something like Sims people, well, this is simply impossible because it would -- again -- mean that a representation somehow generates an actual objectively real world which is not the plain old physical reality where the computer running the simulation exists.
Well, yes. That's what a simulation is.

(And apparently does so with no energy fingerprint at all.)
What?

And if that's true, then it makes sense to ask if Piglet gets scared of the dark when I close my copy of Winnie the Pooh.
Non-sequitur.
 
For what it's worth, I also side with the idea that the "computational mind" is a useful metaphor for a human mind.

However, I also think it's possible for a literal computer to develop a conscious mind, if developed the right way.

This is not a contradiction: They would be different forms of minds. But both would be minds nonetheless. The ultimate outcome would be the same, even if the proximate details are completely different.

Can you describe this "right way"?

And are you saying that a machine can be conscious (which is uncontroversial) or are you saying that consciousness can be achieved through programming alone (which is currently unsupported)?
 