
On Consciousness

Is consciousness physical or metaphysical?


  • Total voters: 94
  • Poll closed.
Status
Not open for further replies.
Wait, let me get this straight…

You're claiming that your body produces no phenomenology?

And you're expecting me to believe that?

Lots of things that appear to have certain features end up not having them. Who would have thought, just decades ago, that matter is actually mostly empty space, that the relative speeds of two accelerating objects don't add linearly, or that the double-slit experiment would turn out the way it did? It seems to me that you are basing your incredulity on your inability to imagine that your impressions of your own conscious experience might be wrong.

Do you really think "qualia" have an actual existence? Or do you think they are simply part of the process of consciousness? Or what?
 
Noise relative to what?

I'll ask you what I'm asking others.

If you propose that you can replicate a behavior performed by animal bodies in a machine, what behavior is it that you propose to replicate?

Thinking (i.e., replicate all the behaviors found in organic brains). We can do that with artificial hearts, artificial kidneys, synthetic skin, to name a few.

The other organs will all eventually have a synthetic or mechanical analogue. It's just a tech limit right now. What makes the brain any different? What makes organic material so special? Organic neurons are not much different from organic ropes: just atoms arranged in certain ways. So if functionality could be preserved with a brain made out of rope neurons (a big if), it should be conscious. Why would it not be conscious?
 
Thinking (i.e., replicate all the behaviors found in organic brains). We can do that with artificial hearts, artificial kidneys, synthetic skin, to name a few.

The other organs will all eventually have a synthetic or mechanical analogue. It's just a tech limit right now. What makes the brain any different? What makes organic material so special? Organic neurons are not much different from organic ropes: just atoms arranged in certain ways. So if functionality could be preserved with a brain made out of rope neurons (a big if), it should be conscious. Why would it not be conscious?

I agree with all of that, except for the conscious rope brain.

Ropes can't perform colors or odors or pain or pleasure or any such things.

You can't make a digestive system out of rope, nor can you make a conscious brain out of rope. It's just the wrong material.

But certainly in theory there's no reason to believe that machine consciousness is impossible. I say "in theory" because we don't yet know if it will prove feasible, given human abilities and resources.

My guess -- and it's only a guess -- is that consciousness will turn out to be a deceptively simple thing once we figure it out. And if I'm right, I hope to Chaos that I will be dead when we do.

But it's one thing to say that machine consciousness is possible in theory. It's quite another to propose that there can be a programming-only solution. Bodily functions can't be programmed: you can't program an artificial liver or an artificial heart; there has to be hardware designed to carry them out. So a conscious machine could certainly have a computer as a component, but at the moment it would appear that in order for a machine to produce a phenogram like the ones produced by our bodies, there's going to have to be some specialized hardware involved for that purpose.
 
I agree with all of that, except for the conscious rope brain.

Ropes can't perform colors or odors or pain or pleasure or any such things.

Neurons can't either. A brain composed of nothing but neurons won't work.

You can't make a digestive system out of rope, nor can you make a conscious brain out of rope. It's just the wrong material.

Is that your big objection? You have no problem with machine consciousness (given a machine-brain that does all the functions that an organic brain does)?


But certainly in theory there's no reason to believe that machine consciousness is impossible. I say "in theory" because we don't yet know if it will prove feasible, given human abilities and resources.

I should read ahead. I agree with this.

My guess -- and it's only a guess -- is that consciousness will turn out to be a deceptively simple thing once we figure it out. And if I'm right, I hope to Chaos that I will be dead when we do.

Wouldn't you want to know?

But it's one thing to say that machine consciousness is possible in theory. It's quite another to propose that there can be a programming-only solution. Bodily functions can't be programmed: you can't program an artificial liver or an artificial heart; there has to be hardware designed to carry them out. So a conscious machine could certainly have a computer as a component, but at the moment it would appear that in order for a machine to produce a phenogram like the ones produced by our bodies, there's going to have to be some specialized hardware involved for that purpose.

Except we can talk about the differences between a simulated tornado and a real tornado. They're both accurate labels we've given particular phenomena.

Obviously my question is What is the difference between simulated consciousness and real consciousness?

I think a better question: If simulated consciousness is an accurate label of a phenomenon, what on Earth is simulated consciousness?
 
I think it's actually possible for BOTH sides in this debate to be correct.

I agree with all of that, except for the conscious rope brain.

Ropes can't perform colors or odors or pain or pleasure or any such things.

You can't make a digestive system out of rope, nor can you make a conscious brain out of rope. It's just the wrong material.

But certainly in theory there's no reason to believe that machine consciousness is impossible. I say "in theory" because we don't yet know if it will prove feasible, given human abilities and resources.

My guess -- and it's only a guess -- is that consciousness will turn out to be a deceptively simple thing once we figure it out. And if I'm right, I hope to Chaos that I will be dead when we do.

But it's one thing to say that machine consciousness is possible in theory. It's quite another to propose that there can be a programming-only solution. Bodily functions can't be programmed: you can't program an artificial liver or an artificial heart; there has to be hardware designed to carry them out. So a conscious machine could certainly have a computer as a component, but at the moment it would appear that in order for a machine to produce a phenogram like the ones produced by our bodies, there's going to have to be some specialized hardware involved for that purpose.

I agree with you Piggy about the ropes, certainly. And I think it's unlikely that a 'programming-only' solution exists. However I think it's plausible that it may turn out that a correctly designed program running on a sufficiently fast digital computer may yet be able to produce the necessary part of the bodily function(s) which the brain uses to produce consciousness. Just because the phenomenology is, as far as we know, always present in mammal brains when they are conscious, doesn't necessarily mean that the phenomenology is itself a necessary component of consciousness. What if its role is literally just to be a distraction?

Consider what a mammal, if it is to survive, actually uses its brain for other than producing the phenogram. In order to pull off all the myriad tricks that brains perform in the wild, they've got to have lots and lots of different sub-routines for a wide variety of purposes, all interconnected and able to cooperate with each other in flexible combinations of networks. Among all those sub-routines there has to be executive functioning (command and control) capacity that can exercise fine-grained control over all those different functions in any given situation.

This could all be achieved with a massively powerful command center with the capacity to maintain continuous 2-way communication with every one of the possible subroutines the brain uses from time to time. That would be a massively expensive option though.

What should a brain do if it wants to spare resources by maintaining a lean, mean, highly agile command center that only has immediate contact between itself and a fraction of its 'crew' but can switch focus between any combination of crew members as quickly as possible? You wouldn't want all the different networks of crew members actually turning on and off as needed because of the time lost in "booting up".

Wouldn't it make sense to have a setup where all the functions are running all the time, but most are on some kind of autopilot, thus keeping themselves busy until they get a call from the boss, and then they can just drop what they're doing and jump into action?

Suppose this is accomplished by having all the currently less needed subroutines keeping each other busy by generating this phenogram and chattering to each other about what they "see" and "hear" and "smell", while the command center is busily using the immediately vital networks to make important decisions. That would accord with the observation of a lag time between initial processing of sensory input and representation of that input in the phenogram. It seems to me it would also make sense that such a thing might evolve as a sort of a fitness windfall--a novel way to make use of some waste noise that a brain had lying around.

So instead of being the integral feature of consciousness that it appears to be, the phenogram is really just a cheap hack that enables an immensely complicated application suite to run on the absolute minimum required hardware. Doesn't this seem like just the sort of trick Evolution likes to play on its progeny?

Then a powerful enough digital computer should be able to perform consciousness without the need for a phenogram at all. It would just require better hardware than the cheap underpowered crap Evolution built for us.
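The "lean command center plus always-running subroutines" arrangement described above is essentially an event-driven worker pool. Here is a minimal, purely illustrative sketch of that pattern; all class, worker, and task names are made up for the example, not taken from any actual model of the brain:

```python
# Illustrative sketch only: workers idle ("autopilot") until the boss calls,
# so there is never a "boot-up" delay when focus switches to them.
import queue
import threading

class Subroutine(threading.Thread):
    """An always-running worker: idles until the command center tasks it."""
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name = name
        self.inbox = queue.Queue()
        self.log = []

    def run(self):
        while True:
            try:
                # If the boss has called, drop everything and act.
                task = self.inbox.get(timeout=0.01)
                self.log.append(f"{self.name} handling {task}")
                self.inbox.task_done()
            except queue.Empty:
                # Otherwise stay "warm" -- no start-up cost when needed.
                pass

class CommandCenter:
    """Talks to only one worker at a time, but can switch focus instantly."""
    def __init__(self, workers):
        self.workers = {w.name: w for w in workers}
        for w in workers:
            w.start()

    def call(self, name, task):
        self.workers[name].inbox.put(task)
        self.workers[name].inbox.join()  # wait for the worker to finish

workers = [Subroutine("vision"), Subroutine("hearing"), Subroutine("smell")]
boss = CommandCenter(workers)
boss.call("vision", "track prey")
boss.call("hearing", "locate rustle")
print(workers[0].log)  # ['vision handling track prey']
```

The trade-off the post describes shows up directly: the workers burn a little idle effort all the time in exchange for instant responsiveness when called.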
 
I agree with you Piggy about the ropes, certainly.
You can't make a computer out of rope alone. The two requirements for a computer are switching elements and communication between switching elements. (And you need energy to drive the system, but that's true for any physical process.) We can certainly use ropes for the communication, but we'd need to have something else to allow for switching.

I think with ropes, simple fixed pulleys, and knots, you could do it, but it would be pretty fiddly. More complex pulleys would make it far easier to build and a lot more reliable.

(There's really not much difference between a rope computer and a fluidic computer, except that you pull on a rope and push on a fluid.)

And once you get to Turing Equivalence, that's it; there isn't anything more to do. And there are a lot of ways to get to Turing Equivalence.
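The claim that switching elements plus interconnection are all you need can be made concrete: a single universal switching element (NAND is the textbook example) composes into all of Boolean logic, whether the switch is a transistor, a fluid valve, or a pulley-and-knot rig. A small illustrative sketch:

```python
# Illustrative sketch: one switching element (NAND) yields all Boolean logic.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):        # NOT from a single NAND
    return nand(a, a)

def and_(a, b):     # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):      # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):      # XOR from four NANDs (the textbook construction)
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# A half-adder -- the first step toward arithmetic -- falls out directly:
def half_adder(a, b):
    return xor(a, b), and_(a, b)   # (sum bit, carry bit)

print(half_adder(1, 1))  # (0, 1)
```

From gates you get adders, from adders an ALU, and with memory and a control loop you reach Turing equivalence; nothing in the construction cares what physical medium implements the switch.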
 
I agree with you Piggy about the ropes, certainly. And I think it's unlikely that a 'programming-only' solution exists. However I think it's plausible that it may turn out that a correctly designed program running on a sufficiently fast digital computer may yet be able to produce the necessary part of the bodily function(s) which the brain uses to produce consciousness. Just because the phenomenology is, as far as we know, always present in mammal brains when they are conscious, doesn't necessarily mean that the phenomenology is itself a necessary component of consciousness. What if its role is literally just to be a distraction?

Consider what a mammal, if it is to survive, actually uses its brain for other than producing the phenogram. In order to pull off all the myriad tricks that brains perform in the wild, they've got to have lots and lots of different sub-routines for a wide variety of purposes, all interconnected and able to cooperate with each other in flexible combinations of networks. Among all those sub-routines there has to be executive functioning (command and control) capacity that can exercise fine-grained control over all those different functions in any given situation.

This could all be achieved with a massively powerful command center with the capacity to maintain continuous 2-way communication with every one of the possible subroutines the brain uses from time to time. That would be a massively expensive option though.

What should a brain do if it wants to spare resources by maintaining a lean, mean, highly agile command center that only has immediate contact between itself and a fraction of its 'crew' but can switch focus between any combination of crew members as quickly as possible? You wouldn't want all the different networks of crew members actually turning on and off as needed because of the time lost in "booting up".

Wouldn't it make sense to have a setup where all the functions are running all the time, but most are on some kind of autopilot, thus keeping themselves busy until they get a call from the boss, and then they can just drop what they're doing and jump into action?

Suppose this is accomplished by having all the currently less needed subroutines keeping each other busy by generating this phenogram and chattering to each other about what they "see" and "hear" and "smell", while the command center is busily using the immediately vital networks to make important decisions. That would accord with the observation of a lag time between initial processing of sensory input and representation of that input in the phenogram. It seems to me it would also make sense that such a thing might evolve as a sort of a fitness windfall--a novel way to make use of some waste noise that a brain had lying around.

So instead of being the integral feature of consciousness that it appears to be, the phenogram is really just a cheap hack that enables an immensely complicated application suite to run on the absolute minimum required hardware. Doesn't this seem like just the sort of trick Evolution likes to play on its progeny?

Then a powerful enough digital computer should be able to perform consciousness without the need for a phenogram at all. It would just require better hardware than the cheap underpowered crap Evolution built for us.

Very informative post!

It is interesting to note that the "boss" tends to fall apart quickly when the subroutines are quiet. People put in a sensory deprivation tank go "nuts" in a fairly short amount of time.

For some reason, the "boss" needs constant feedback from the various subroutines, to the point where content will be invented (hallucinations) to keep the subroutines "chattering".
 
Ropes can't perform colors or odors or pain or pleasure or any such things.
What does this sentence even mean? Colours and the like are internal representations of external stimuli, connected with representations of concepts and memories. Given that a rope computer has the appropriate external stimuli, why can it not have the same kind of internal representations?

I think that the word 'perform' in your post is a wishy-washy term that is used to hide a piece of dualism, probably without your express knowledge.
 
It's been a few years since I read Searle, but I think the point of the Chinese Room is sometimes misinterpreted.
Probably.

Anyhoo, the problem I see with it is how it implies there's a magic bean of "understanding" which, BTW, dovetails with the Philosophical Zombie, a robot that acts exactly like a person (would pass any Turing Test) yet has a quale-free internal experience.
Exactly. Searle argues that although the Room provably understands Chinese since it can answer arbitrary questions in that language, it doesn't understand understand, because he can't locate the magic bean of understanding.

This is, of course, pure idiocy, and was pointed out immediately upon publication of the paper. Searle is still arguing, and his responses demonstrate that he doesn't understand (let alone understand understand) the flaws in his argument.
 
Probably.


Exactly. Searle argues that although the Room provably understands Chinese since it can answer arbitrary questions in that language, it doesn't understand understand, because he can't locate the magic bean of understanding.

This is, of course, pure idiocy, and was pointed out immediately upon publication of the paper. Searle is still arguing, and his responses demonstrate that he doesn't understand (let alone understand understand) the flaws in his argument.

A simple calculator can do that (answer arbitrary questions in a specific language). I don't think simple calculators understand anything though. You'd probably have to claim an abacus understands mathematical problems.
 
A simple calculator can do that (answer arbitrary questions in a specific language). I don't think simple calculators understand anything though. You'd probably have to claim an abacus understands mathematical problems.
A calculator can't answer arbitrary questions in a given language. It can answer specific questions within a defined problem space within a given language. Not the same at all.
 
A calculator can't answer arbitrary questions in a given language. It can answer specific questions within a defined problem space within a given language. Not the same at all.

Depends on how you mean "arbitrary":

"based on or determined by individual preference or convenience rather than by necessity or the intrinsic nature of something".
http://www.merriam-webster.com/dictionary/arbitrary

By that, I can ask the calculator arbitrary math problems. I can ask a person arbitrary questions purely in mathematical terms...if you agree that math is a language (I think you will).
 
Not the way you're using the term, no.

So, you don't see color? You don't have a sense of smell? You don't hear sounds? Because that's how I'm using the term, which, if you'd actually been reading all my long redundant posts -- which you say you've bothered to cut and paste together and measure for length, for some reason -- you should know.
 
Neurons can't either. A brain composed of nothing but neurons won't work.

Obviously, brain tissue does produce color and the rest of our phenomenology.

We know that phenomenology is a product of the brain.

We don't know how, but we know the brain does it.

Although you're correct, the neurons themselves cannot do this.

But the brain does. That is not in question.
 
Is that your big objection? You have no problem with machine consciousness (given a machine-brain that does all the functions that an organic brain does)?

In theory, there's no reason to object to synthetic machine consciousness, given the fact that biological machines can be conscious.
 
Wouldn't you want to know?

I would love to know. But I would not want for it to be generally known, because I don't have faith in the human species' ability to handle that knowledge well. In fact, I have a dystopian novel sketched out in my head based on a world after that discovery.


Except we can talk about the differences between a simulated tornado and a real tornado. They're both accurate labels we've given particular phenomena.

Obviously my question is What is the difference between simulated consciousness and real consciousness?

I think a better question: If simulated consciousness is an accurate label of a phenomenon, what on Earth is simulated consciousness?

I don't particularly care about that last question, to be honest. I don't believe it's at all important to understanding actual consciousness.

I also don't care about those labels.

But I hope that we can agree that if you want to build a real-world instance of anything, you have to build something that does what the target system does in real spacetime.

A flight simulator doesn't get you to Paris, it just fools your mind.
 
I agree with you Piggy about the ropes, certainly. And I think it's unlikely that a 'programming-only' solution exists. However I think it's plausible that it may turn out that a correctly designed program running on a sufficiently fast digital computer may yet be able to produce the necessary part of the bodily function(s) which the brain uses to produce consciousness.

I agree that speed is almost certainly a key component here. But that kills the scenario espoused by the pure informationalists, who hold that speed is irrelevant, and a conscious machine would be conscious at any operating speed, however slow.

Then a powerful enough digital computer should be able to perform consciousness without the need for a phenogram at all. It would just require better hardware than the cheap underpowered crap Evolution built for us.

As it stands, the phenogram simply is consciousness. When we study consciousness, that's what we're examining. So I can't go here with you, I'm afraid.
 