On Consciousness

Is consciousness physical or metaphysical?


Nobody claims that. Nobody has ever claimed that.

Baloney.

That's been claimed on every thread on this topic so far, by both you and RocketDodger.

That's why I put quotation marks around the phrase.
 
Yes. That's the magic bean. He can't say what it is or what it does or why it can't be simulated too, but damn if it isn't an essential prerequisite for consciousness.

Just because we haven't answered a question yet doesn't mean that "magic" must be the answer.

Was "magic" the explanation for the northern lights up to the time we understood the interaction of solar winds and the earth's magnetic field?

Of course not.
 
...but an excellent simulation of a tornado will act just like a tornado. Why wouldn't an excellent simulation of a conscious brain act just like a brain and therefore be conscious?

No. It won't.

Only a model of a tornado, like ones in a tornado box, will do that.

A digital simulation of a tornado must be interpreted by an observer. It will not do what a tornado does. It will have no windspeed, for example, only an output which a properly built brain can interpret as a windspeed.
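To put that concretely, here's a toy sketch (the model and names like windspeed_mps are invented purely for illustration). The machine only updates numbers in memory; the "windspeed" becomes a windspeed only when a reader interprets the output that way:

```python
# Toy "tornado simulation" -- a hypothetical model, invented for
# illustration. The machine only updates numbers; nothing gets windy.

def step(state, dt=0.1):
    # Advance the toy model: the "windspeed" is just a float in memory.
    state["windspeed_mps"] += state["spinup_rate"] * dt
    return state

state = {"windspeed_mps": 40.0, "spinup_rate": 2.5}
for _ in range(10):
    state = step(state)

# This emits characters that a human brain reads as a windspeed;
# the bits themselves blow nothing over.
print(f"simulated windspeed: {state['windspeed_mps']:.1f} m/s")
```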
 
No, if humans were extinct and the computer were powered up and running, it would continue experiencing the game. The game would not disappear.

It would continue running, but there would be no game, only lights and sounds.

If a brain running outside a body were dreaming, would it not be experiencing a dream if no other humans were around?

Yes, and this is the difference between the computer and the brain.
 
Yes, I did answer your question.

If you program a computer to simulate every molecule in the body, and run that simulation, why wouldn't it be "digesting"?

Answer that, and you have your answer for consciousness.

OK, then would a perfect simulation of a brain still report that it was experiencing consciousness?

If it did, would it be lying or deluded?

If it didn't, why wouldn't it?
 
OK, I really do want to stop talking about computers, because the questions in the poll are badly formed.

Essentially, the poll is asking "Can computers be conscious, or is consciousness supernatural?"

Ask that about any other bodily function, and you see the error immediately.

Granted, we don't understand how the body pulls off this function, but it's a bodily function nonetheless.

So here's a challenge to the computeristas, and if you can come up with a decent answer, then there may be something to talk about….

Let's go back to our thought experiment where we have 4 animals looking at the sky -- a human with a normal sober brain, a human with tritanopia, a human who just ate a little psilocybin mushroom, and a dog.

The light from the sky produces a different result in each brain: one sees blue, the second sees green, the third sees yellow, and the fourth sees gray.

OK, so you set up a sensor to your computer and point it to the sky.

What color does your computer see? Blue, green, yellow, gray, or some other color?

And why?

And at what point in the process is that color produced?
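(For concreteness, the naive setup would look something like this sketch -- the thresholds and labels are invented for illustration. Notice that the "color" is a string chosen by whoever wrote the table, which is exactly the problem:)

```python
# A naive "color-seeing" computer: a sensor reading mapped to a label.
# The label comes from whoever wrote this table, not from the machine
# producing color the way a brain does.
COLOR_TABLE = [
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
]

def report_color(wavelength_nm):
    for lo, hi, label in COLOR_TABLE:
        if lo <= wavelength_nm < hi:
            return label
    return "gray"

print(report_color(470))  # -> "blue", because the table says so
```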
 
OK, then would a perfect simulation of a brain still report that it was experiencing consciousness?

Who cares?

It's trivial to make a computer report things.

You can program a computer to report anything in response to anything.

I can set up a computer to report "I see red" when light is shined on a sensor.

Or I can set it up to report "I see green" or "I smell cinnamon" or "I am the Queen of England".
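A minimal sketch of the point (the sensor reading is faked as a plain number here, since the hardware details don't matter):

```python
# The reported string has no necessary connection to the stimulus.
# Swap REPORT for any text at all and the "report" changes.
REPORT = "I see red"  # or "I see green", "I smell cinnamon", ...

def on_light(intensity):
    # Any reading above an arbitrary threshold triggers the report.
    if intensity > 0.5:
        print(REPORT)

on_light(0.9)  # prints: I see red
```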
 
Just because we haven't answered a question yet doesn't mean that "magic" must be the answer.

Was "magic" the explanation for the northern lights up to the time we understood the interaction of solar winds and the earth's magnetic field?

Of course not.
You don't have the slightest idea what we're talking about, do you? You literally stopped reading at "magic," and chose to take umbrage at the word itself rather than understanding what it meant for your argument. Call it phlogiston or dumbsucker theory or whatever you like, you're still asserting something's existence with no evidence but a dislike for the alternative.
 
OK, I really do want to stop talking about computers, because the questions in the poll are badly formed.

Essentially, the poll is asking "Can computers be conscious, or is consciousness supernatural?"

Ask that about any other bodily function, and you see the error immediately.
And yet again, you dodge the questions that highlight your fundamental error.

Who cares?
You do, apparently, since you go so far out of your way to avoid the issue.
 
Of course it's OK to discuss here.

I can't imagine it's productive, but it's certainly OK.

It's just not anything I care to discuss, so I'll bow out of that portion of the thread.
And yet you constantly want to discuss exactly that: you want to tell us that consciousness has something that inherently makes it impossible for a computer to be conscious.

This is philosophy (and bad philosophy), and has nothing to do with studying animal brains.
 
Baloney.

That's been claimed on every thread on this topic so far, by both you and RocketDodger.

That's why I put quotation marks around the phrase.

Are you planning on answering my question about consciousness requirements, and also giving us a definition of it? Or am I on your ignore list now?
 
And yet again, you dodge the questions that highlight your fundamental error.


You do, apparently, since you go so far out of your way to avoid the issue.

Yes, Piggy, I am disappointed in you. As interested as you are in the nature of consciousness, when we are just about to pin you down on your dualism woo, you dodge with "who cares?" responses.

Your answer to my "perfect simulation of the brain" question was particularly dodgy. A perfect simulation of the brain would not be programmed to say "I see red" when red signals came in from the eye. It would be programmed to do what the brain does. A brain sim would sim the actions of neurons. That's it. No special-case "I see red" programming. If it's programmed to act like the brain, why wouldn't it act like the brain?
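For instance, a neuron-level sim is nothing but a state-update loop, something like this toy leaky integrate-and-fire sketch (parameters and sizes invented for illustration). Note there is no "I see red" branch anywhere; any report would have to emerge from the simulated dynamics:

```python
import numpy as np

# Toy leaky integrate-and-fire network step. The code only evolves
# membrane potentials and spikes -- no special-case report strings.
def lif_step(v, spikes, weights, input_current,
             dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    dv = (-v + weights @ spikes + input_current) * (dt / tau)
    v = v + dv
    fired = v >= v_thresh
    v = np.where(fired, v_reset, v)  # reset neurons that fired
    return v, fired.astype(float)

rng = np.random.default_rng(0)
n = 100
v, spikes = np.zeros(n), np.zeros(n)
weights = rng.normal(0.0, 0.1, (n, n))  # random synaptic weights
for _ in range(1000):
    v, spikes = lif_step(v, spikes, weights, rng.random(n))
```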

BTW: What are your computer science credentials, Piggy?
 
I really do need to keep my promise and get off this topic, but since I posed a question….

I asked how one would build a machine that doesn't just respond to light, but actually produces color, the way our brains do.

One answer is to create a perfectly detailed virtual digital simulation of a normal human brain, which produces color in response to light.

But there's a huge problem with that answer.

It relies on the assumption that there exists a "Pinocchio point" at which a simulation becomes so detailed that the machine takes on the qualities and behaviors of what's being simulated.

But Pinocchio points do not exist -- no matter how detailed the simulation, the machine running the simulation never takes on the qualities and behaviors of the system being simulated.

In other words, there is no sufficiently detailed digital simulation of oxidation which will cause the computer to rust. And there is no sufficiently detailed simulation of a tornado which will cause the computer to have a windspeed, or of an aquarium which will cause the computer to be wet.

And this fact does not change simply because we are trying to simulate a bodily function, whether that's digestion or heartbeat or consciousness. Even a perfect simulation of digestion will never cause us to point at the machine and say that it is digesting.

And remember, our goal is to make a conscious machine -- the machine itself must be doing whatever is necessary to be conscious, just as our bodies are.

"But wait," some have said, "you're committing a framing error. You can't look at the machine -- you have to use the simulation itself as a frame of reference."

Two problems with that.

First, there is no such frame of reference.

Where do simulations exist? In other words, if I run a simulation of a tornado, where's the tornado? If I want to talk about what's going on in the tornado, what frame of reference do I use?

It can't be the frame of reference of the machine, because the computer running the sim has no windspeed and can't knock down houses.

So the simulated tornado doesn't exist in the machine. Where, then, does it exist?

This is where systems theory comes in.

In a system which includes only the machine, there is no tornado.

The simulated tornado only exists in the mind of an observer which is properly built to interpret the actions of the machine as a tornado.

The simulated tornado depends on a system including the machine and an observing mind.

So, for example, suppose aliens wiped out the human race and took over the world, and walked up to our machine running the virtual digital sim….

Now, these aliens have evolved on another planet, so we cannot expect that they share our same sensory apparatus or conscious qualia. Perhaps they have qualia that let them consciously experience magnetic fields (as birds likely do). They probably respond to light, but the odds are extremely small that they respond to the same tiny band of the spectrum that we do. They may respond to many of the same chemicals we can smell, but they won't have evolved to experience odors as we do -- who knows what qualia they produce in response.

And obviously, our numbers and letters are meaningless to them.

The computer running the sim is not, as we've established, actually creating a tornado. It is only producing non-tornado-like output which is designed to trigger ideas about tornados in human brains.

The patterns of pixels on the screen, the patterns of waves emanating from the speakers, the information in the printouts, all of these are tailored to fool a human brain specifically. The aliens, even if they've seen real tornados, won't be able to recognize a tornado in the simulation, no matter how detailed it is. It won't even sound like a tornado to them, because we haven't bothered to tailor the sound to their range of hearing (if they have such a sense).

The only way to make them experience a tornado is to throw a real tornado at them.

The simulation only exists in a system containing the machine running the sim and a brain built to understand it. There is no "tornado" in some frame of reference independent of both the raw physical actions of the machine and the imagination of the observer. The "world of the simulation" does not exist.

But wait, isn't sufficient information about the tornado preserved in the actions of the machine, so that we can say there's a tornado in those actions?

No. Again, go back to the analogy of the rock, water, and shore.

The properties of the rock don't transfer to the water, and the properties of water don't transfer to the shore.

The machine doesn't preserve or reproduce any actual qualities of the tornado. It translates patterns into another medium. In doing so, it preserves information about the tornado, but not the tornado.

And that information is open-ended.

What I mean by this is that you can take that same information and use it to describe a different system. For example, you can invert your axes. You could interpret the temporal data as if it were spatial data. There are all kinds of things you can do to get a different system from the same data.
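A small illustration of that open-endedness (the arrays are invented for the example): the same block of numbers can be read as a time series, as a spatial grid, or as different raw bytes entirely, and nothing in the data itself picks one reading:

```python
import numpy as np

data = np.arange(12).reshape(3, 4)  # one block of numbers

# Reading 1: three sensors sampled at four time steps.
as_time_series = data            # rows = sensors, columns = time

# Reading 2: the axes inverted -- temporal data read as spatial.
as_spatial_grid = data.T

# Reading 3: the same raw bytes viewed as a different numeric type.
as_other_system = data.view(np.uint8)

# One block of memory, three incompatible "systems"; the choice of
# reading lives in the interpreter, not in the data.
```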

And you can't avoid this by making a complete simulation. First, there's Heisenberg to deal with -- which makes a completely perfect simulation impossible -- but even without that problem, an interpreter has to decide what background to anchor the information upon. You could, of course, program that in, but that leads to an infinite regress, because then that system can itself be construed against different backgrounds.

And even if you could avoid those problems, you've got a third problem, because without outside information, there's no way for an interpreter to know that the simulation is complete -- it could just as well be an incomplete simulation of something else!

From the point of view of the machine, there's no way to determine what it's supposed to be running a simulation of. There are always infinite options.

So now let's return to our proposed solution -- can we make a machine "see blue" by running a perfect simulation of a human body looking at a perfect simulation of a clear daytime sky, right down to the molecules and photons?

Nope.

Why?

Because to make a machine "see blue" in the real world it must be doing what our brains are doing in the real world when we see blue. When we look at the machine itself, the raw physical apparatus, we must see actual physical processes that mimic what our brains are doing when we see blue -- or else some other physical processes which have the same result. And we can't shift our frame of reference to any "world of the simulation" because such worlds are imaginary.

But wait….

We know that you can get a workstation to simulate a workstation, so isn't the brain a special case? Isn't it the case that a brain is also a general purpose computer, and therefore a computer simulating a brain is just like a computer simulating a computer?

No.

That's because it's not true that the brain is, in all its functions, a general purpose computer. (That link explains it well enough, so I refer you to that explanation.)
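(For reference, "a computer simulating a computer" means something quite specific. In a toy fetch-execute sketch like this one -- the three-instruction machine is invented for illustration -- the host really does carry out the guest's computation, which is why emulation is a special case:)

```python
# A toy virtual machine. The host actually performs the guest's
# computation step by step -- emulation, not mere depiction.
def run(program):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            break
        pc += 1
    return acc

print(run([("LOAD", 2), ("ADD", 3), ("HALT", 0)]))  # -> 5
```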

And specifically, when we look at what the brain is doing when conscious experience is going on, we see very un-computer-like behavior. Conscious experience relies on behavior that is tightly synchronized and coordinated in time. Also, the signature waves must be generated, become coherent, and strengthen sufficiently.

This process is time-dependent; it can't be run at any arbitrary speed. This alone throws the "Turing machine" analogy right out the window.

Also, the result does not in any way resemble the output of a Turing machine or a general purpose computer. The result is this hologram-like thing I've been calling the phenogram, which is the result of a physical process, not a symbolic calculation.
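(Purely to illustrate what "becoming coherent" means mathematically -- and emphatically not as a model of the brain mechanism just described -- here is a toy Kuramoto sketch, a standard textbook picture of synchronization, with parameters invented for the example:)

```python
import numpy as np

# Toy Kuramoto model: coupled oscillators lock phases as they run.
# The order parameter r goes from ~0 (incoherent) to ~1 (coherent).
rng = np.random.default_rng(1)
n, K, dt = 200, 4.0, 0.01
omega = rng.normal(0.0, 1.0, n)          # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, n)   # initial phases

for _ in range(2000):
    mean_field = np.exp(1j * theta).mean()
    r, psi = np.abs(mean_field), np.angle(mean_field)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print(f"coherence r = {np.abs(np.exp(1j * theta).mean()):.2f}")
```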


OK, so we can't shortcut the process by running a perfect simulation of a brain that sees blue -- first of all because perfect sims are impossible, but also because even a perfect sim wouldn't create a Pinocchio point for the machine.

Well, can we simply rig the computer to respond differently to what we see as "blue light" than it does to other types of light which we see as not blue?

No.

Why not?

Well, the answer should be clear from my post defining consciousness. Differential behavior is simply the old bounce-back system. We know we can get that without involving consciousness at all. So that's no solution.

At the end of the day, there are no such shortcuts.

We have seen by observing animal brains that the A-->B-->C neural chain does not, by itself, generate conscious experience.

If it did, we'd be conscious of everything going on in our brains. But we're not. Not by a long shot.


Nor can we say that "self-referential" neural behavior is sufficient. There are self-referential feedback loops all over the brain which have no effect on our conscious experience.

And we can't extrapolate that out to higher-order "self-reference" -- as in self-awareness, knowing that I am a conscious being -- because such higher-order thought demands that consciousness already be present as a prerequisite.

It's not the neural chain that "feeds" our conscious experience, that generates qualia, that sparks the phenogram: it's tightly synchronized oscillations in specific areas of brain real estate, in the presence of a trio of signature deep brain waves, that somehow (nobody yet knows how) translate a subsection of that neural activity -- which contains, in part, neural translations of the stuff that bounces off our bodies and bounces around inside our bodies -- into something entirely different from either neural activity or the activity of stuff in the outside world.

So what do we need to do in order to make a machine "see blue" in response to light from the sky?

It's not enough to make it respond overtly to the light, because that can be done without making the machine generate a phenogram.

No, at the end of the day, the only way to make a machine "see blue" or have any other conscious experience will be to figure out how our own brains produce this truly bizarre bodily function which results not just in some overt response, not just in some chain reaction among components, but in conscious experience which is something different from the neural activity, and then figure out how to make a machine do the same.

But however it's done, it will have to be done in the real world, not a virtual simulation.

That's how we know that it's a hardware problem, at least in part. There can be no programming-only solutions.

OK, that's said, now I really do want to extricate myself from the tar-baby of "computer consciousness" speculations and focus instead on the real meat and potatoes -- the biology of consciousness in animals.
 
BTW: What are your computer science credentials, Piggy?

The question is, what are the computeristas' credentials in cognitive neurobiology and general physics?

It's blind spots in those fields, along with a too-literal approach to information theory, that cause the bulk of their errors.

But enough of that... time to talk animal brains.
 
OK, I really do want to stop talking about computers, because the questions in the poll are badly formed.

Essentially, the poll is asking "Can computers be conscious, or is consciousness supernatural?"

Ask that about any other bodily function, and you see the error immediately.

Granted, we don't understand how the body pulls off this function, but it's a bodily function nonetheless.

[…]

Thinking is just like crapping?
 
I really do need to keep my promise and get off this topic, but since I posed a question….

[…]


As the argument grows weaker the posts grow longer.
 
As the argument grows weaker the posts grow longer.
To be fair, there are some people in this thread asking Piggy to explain his position better/more completely. It should hardly be a surprise that that would result in an even longer wall of text.

I, however, am not one of those people; I've heard enough of it the last half dozen times it was reposted. So I can skim the post for anything new (nope) and brush the whole thing off with a tl;dr.
 
The question is, what are the computeristas' credentials in cognitive neurobiology and general physics?

It's blindspots in those fields, along with a too-literal approach to information theory, which cause the bulk of their errors.

But enough of that... time to talk animal brains.

No, time to define "conscious".
 