On Consciousness

Is consciousness physical or metaphysical?


No one in the discussion seems to care about the other minds problem.
Science will never be able to be sure about other minds (in computers, robots, or perhaps certain other bio-electrochemical processes in nature, who knows...).
That's a limitation on what we can discover about consciousness.
See explanation: http://www.internationalskeptics.com/forums/showthread.php?t=259978

Did you even read Piggy's excellent post #4171? We can't ever know anything about anything other than ourselves, be it mind or color or whatever. All you can ever know is how you react indirectly to particles that bounce off of you after they've bounced off of something else. Your other minds problem fits quite nicely inside this much grander conundrum.
 
Second, the experience is caused entirely by activity in your brain, which is a dark and confined place. You can't possibly be experiencing anything about, say, light or trees or cinnamon, because none of that stuff is in your brain, the qualities of these things don't somehow magically "rub off" onto your brain when molecules and photons bounce off your body, and your experience is made entirely from brain activity.

Of course, some patterns and relationships must be preserved between your brain activity and those molecules and photons bouncing off of you (and the molecules bouncing around inside your body), or else the resulting phenogram wouldn't work as a substitute navigation space for the outside world -- but that preservation happens through a kind of transfer between that sort of stuff and brain activity.
So what you're saying is that the brain uses sensory data to create a representative model of reality?
 

Close. I'm saying the brain creates a very specific kind of partially representative model of the world outside it based in part on what the body's sensory systems are doing.

If you get as vague as your statement there, you could be claiming that the brain writes flowcharts and PowerPoint presentations.
 
Well, certainly. We know from both personal experience and carefully controlled experiment that the models our brains construct are limited and flawed. Of course, that's partly by definition - the whole point of a model is for it to be simpler and more accessible than the thing being modelled.
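To put the same point in toy form -- this is just a sketch, not anyone's actual theory of perception, and all the names in it are invented for illustration -- a "model" built from noisy samples can be drastically simpler than the process generating them and still be good enough to act on:

import random

# Toy sketch only: the "world" is a process with structure the observer
# never represents; the observer's entire "model" is one number nudged
# toward each noisy sample. Names (world_temperature, internal_estimate)
# are made up for this example.

def world_temperature(t):
    return 20.0 + 5.0 * ((t % 24) / 24.0) + random.gauss(0, 0.5)

internal_estimate = 20.0              # the whole "model": a single number
for t in range(100):
    sample = world_temperature(t)     # noisy, indirect sensory contact
    internal_estimate += 0.1 * (sample - internal_estimate)

print(round(internal_estimate, 1))    # useful for behavior, but lossy and flawed

The model is limited and flawed by construction, and that's exactly what makes it cheap enough to use.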
 

True, but why do I suspect that you're ignoring the "very specific kind" qualifier here?
 
I think it's still conscious. ;)

:D

I hope my explanation of consciousness has elucidated the flaw in the choices originally offered by the OP:

• Consciousness is a kind of data processing and the brain is a machine that can be replicated in other substrates, such as general purpose computers.

• Consciousness requires a second substance outside the physical material world, currently undetectable by scientific instruments

Angrysoba was correct to point out in post #2 that we need a third option, which I would formulate like this:

• Consciousness is a bodily function which, in theory, should be performable by a properly designed and properly built machine, but which -- like all bodily functions -- can only be represented by computer modeling, although such virtual modeling does not cause the phenomenon to actually occur in spacetime (i.e. in reality). Running a computer sim of a brain will not cause the computer to become conscious for the same reason that running a computer sim of a tornado will not cause the computer to become damaged.
 

I don't think that's right, because tornado damage is the output of a system.

Massimo Pigliucci has compared consciousness to photosynthesis, an analogy that is erroneous in the same way as the tornado one.

How is consciousness an output?

(BTW this aspect has been discussed in this thread already IIRC.)
 

I don't think you understand what I'm saying.

Is digestion an "output"? I don't see it that way.

It's a physical process, and the data-flow analogy simply does not apply.
 
Angrysoba was correct to point out in post #2 that we need a third option, which I would formulate like this:

• Consciousness is a bodily function which, in theory, should be performable by a properly designed and properly built machine, but which -- like all bodily functions -- can only be represented by computer modeling, although such virtual modeling does not cause the phenomenon to actually occur in spacetime (i.e. in reality). Running a computer sim of a brain will not cause the computer to become conscious for the same reason that running a computer sim of a tornado will not cause the computer to become damaged.

I'm not sure I agree. Information is information, within the system it's interacting with. People keep saying we have no way of knowing if we're in a simulation or not. Well, neither would an AI in a computer, if the thing were programmed that way.

I'd like to know what consciousness can possibly be that isn't magical but which is not information processing like your #1. If it's information processing, then it can be replicated on a machine. If not, then we need to explain what it is. It certainly seems to be information processing, wouldn't you agree?
 
I don't think you understand what I'm saying.

Is digestion an "output"? I don't see it that way.

It's a physical process, and the data-flow analogy simply does not apply.

I've been out of this thread for a while, so I don't remember if this was brought up before, but it sounds like your position is very similar to Dennett's argument in "Why You Can't Make A Computer That Feels Pain". Is that right?
 

I agree with several things Dennett is saying there, but I still think he has missed the larger point.

He is correct in pointing out that a computer simulation of a hurricane doesn't result in anything getting wet, and does not confer upon the computer any actual wind speed.

But let's take this example, or rather a similar example of a tornado, and look at it from a systems theory point of view.

It's possible to build a tornado box in which real-world models of tornadoes can be produced. It's also possible to simulate a tornado on a computer. (For our purposes here, we'll posit a stipulative definition of a "model" as a real-world functional replica, and "simulation" as a virtual computer rendering of the phenomenon.)

What's the primary difference here?

Well, the model tornado can destroy real-world objects in its path. And it can do this whether or not anybody is there to observe it.

In other words, the model tornado is capable of existing in its entirety within a system that contains only the tornado itself. It can "be a tornado" on its own.

Let's compare that to a computer simulation.

In that case, if we remove any observer who knows how to interpret the output of the computer, and leave the machine alone in a system by itself, there is nothing there which resembles a tornado at all. It's just a computer doing what computers do, changing electronic states.

The simulation only exists in a system which includes at least one brain to act as programmer and interpreter. Someone has to be able to understand that the outputs of the machine -- which may be pixels displayed on a screen, sound coming from speakers, printouts on paper, and such -- are intended to represent a tornado, or someone has to be observing who has the right kind of brain which can be fooled into "seeing a tornado" by the non-tornado-like output. And even so, this cannot be achieved by programming alone, but requires special hardware such as a display, printer, or speakers.

So the simulated tornado has no independent existence in the real world of spacetime, matter, and energy. The only thing existing in that realm is a computer behaving in essentially the same way as a computer simulating anything else.

This means that computer simulations, in order to be simulations, demand a system which incorporates third-party observers, and that the simulation doesn't actually happen in the computer at all, but resides in the interaction between the computer and an observer. Remove the outside observer and the simulation simply doesn't exist.
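Here's a deliberately crude sketch of that point in code (not a claim about how real weather models work; the variable names mean nothing to the machine):

# A "tornado simulation" stripped to its bare mechanics: the computer only
# changes stored values. The mapping suggested by the labels and comments
# (position, wind speed) exists in the reader's head, not in the hardware.
state = {"x": 0.0, "y": 0.0, "wind": 40.0}

for step in range(10):
    state["x"] += 1.2          # we read this as "the funnel moving east"
    state["wind"] *= 1.05      # we read this as "the storm intensifying"

print(state)  # to the machine, just numbers in memory; no air moves anywhere

Take away everyone who knows what the labels are supposed to mean, and what's left is a device changing electronic states.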

(This is why it's not possible that we exist inside a simulation. If our universe is a machine producing a simulation, we would have no way of knowing what sort of thing the universe is meant to be simulating to the beings running it, just as a sentient microbe inside a computer running a simulation program would have no way of knowing what the computer was supposed to be simulating for its observers… it would only perceive the mechanical workings of the machine itself.)

My body is performing a phenogram when it's conscious, regardless of whether anyone else is around to be part of a system with it.

Therefore it's not a simulation, and one cannot simply program a general purpose computer to do the same thing virtually and expect an actual phenogram to be the result in actual spacetime.

If we want a machine to generate a phenogram, we're going to have to build the proper hardware to make that happen. A computer might well be one of the components of such a machine, but a programming-only solution to producing an actual phenogram in the real world simply is not possible.
 
I'm not sure I agree. Information is information, within the system it's interacting with. People keep saying we have no way of knowing if we're in a simulation or not. Well, neither would an AI in a computer, if the thing were programmed that way.

I'd like to know what consciousness can possibly be that isn't magical but which is not information processing like your #1. If it's information processing, then it can be replicated on a machine. If not, then we need to explain what it is. It certainly seems to be information processing, wouldn't you agree?

Information processing is a very loose and ill-defined term. There are many definitions of information, and all of them are problematic. But they all have one common feature -- information is an abstraction, a measurement.

There is no independent "information" in the real world, just as there are no independent kilometers in the real world.

Information is always a measurement or abstraction of something else which is not information.
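One standard way to make that concrete is Shannon's measure: the number computed below is a measurement about a distribution of outcomes, not a stuff located anywhere, any more than a kilometer is located anywhere. (A toy calculation, offered only as an illustration of "information as measurement".)

from math import log2

def shannon_entropy(probabilities):
    # Average information, in bits, of a set of outcome probabilities.
    return -sum(p * log2(p) for p in probabilities if p > 0)

# The "information" of a fair coin is a number about the coin's statistics,
# not a substance sitting inside the coin.
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit
print(shannon_entropy([0.9, 0.1]))   # about 0.47 bits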
 
• Consciousness is a bodily function which, in theory, should be performable by a properly designed and properly built machine, but which -- like all bodily functions -- can only be represented by computer modeling, although such virtual modeling does not cause the phenomenon to actually occur in spacetime (i.e. in reality). Running a computer sim of a brain will not cause the computer to become conscious for the same reason that running a computer sim of a tornado will not cause the computer to become damaged.
When we leave aside the false analogy with tornadoes, what is the difference between this and:
Consciousness requires a second substance outside the physical material world, currently undetectable by scientific instruments
 
Maybe there's an emotional hangup with the phrase "data processing."

Brain cells are switches. Mechanical switches can be made that do the same kinds of switching that brain cells do. Connect enough of these mechanical switches together in the right way, and it will perform consciousness. How would we know it is? By its self-reportage that it's experiencing consciousness. Until it reports this to us, the consciousness that it's experiencing is completely internal to its process. It doesn't produce anything like photosynthetic sugar or tornado damage. The consciousness it experiences never leaves the realm of the abstractions in its switches and signals, until it's reported to us, and that report is data (information) and not at all like sugar or physical tornado effects, because data is substrate independent.

If this is wrong, Piggy, explain to me how you know it's wrong.
 
When we leave aside the false analogy with tornadoes, what is the difference between this and:

I don't know of any machines which are undetectable or which exist outside the material world.

That's the difference.
 
What is the difference between a poem and a simulated poem, Piggy?

A poem is another example of the kind of thing which exists only in a system involving an object and an interpreting mind.

The piece of paper by itself is simply a piece of paper with ink on it.

So all poems would properly fall into that same large class as computer simulations, and outside the class I'm calling models here.

In other words, a poem about a lion is not a lion.

Therefore, comparing consciousness to a poem makes no sense.
 
Maybe there's an emotional hangup with the phrase "data processing."

Brain cells are switches. Mechanical switches can be made that do the same kinds of switching that brain cells do. Connect enough of these mechanical switches together in the right way, and it will perform consciousness. How would we know it is? By its self-reportage that it's experiencing consciousness. Until it reports this to us, the consciousness that it's experiencing is completely internal to its process. It doesn't produce anything like photosynthetic sugar or tornado damage. The consciousness it experiences never leaves the realm of the abstractions in its switches and signals, until it's reported to us, and that report is data (information) and not at all like sugar or physical tornado effects, because data is substrate independent.

If this is wrong, Piggy, explain to me how you know it's wrong.

If that system of physical switches can produce the same real phenomena in spacetime which are the signature features of conscious activity in the brain (see my post on what consciousness is, upthread) then we've got consciousness.

But no, "self-reportage" is not enough.

You can rig up an unconscious system to report that it is conscious, or to report what we would expect, say, a human brain to report under certain conditions (e.g. "I'm seeing a green light"). But that's trivial.

The goal, or one of the goals, of current research is to allow us to determine the presence of consciousness without relying on reports.

This has been done in visual tests using monkeys, for example. The monkeys are put into a situation of binocular ambiguity, where two different visual cues are presented simultaneously.

Experiments on people show that when this is done, the non-conscious brain "processes" both images, but the phenogram can only incorporate one at a time, so it tends to switch back and forth between a conscious experience of one or the other, depending on which image the non-conscious brain finds easier or better to work with.

The monkeys are trained to respond to visual cues for food, and the simultaneous pair of images are shown in a split-vision test, so that both images cause activity in the non-conscious brain, but only one of those images is a "food" cue to the monkey.

That way, by observing the food-obtaining behavior of the monkey, we can know which of the two images is being incorporated into the phenogram at any given moment.
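Schematically, the inference looks like this (a toy sketch of the logic only, not the published protocol; all names are invented for illustration):

import random

FOOD_CUE, OTHER_CUE = "food_image", "other_image"

def current_percept():
    # Stand-in for rivalry: the conscious percept alternates between the two
    # images even though both are presented continuously.
    return random.choice([FOOD_CUE, OTHER_CUE])

# Both images stay on throughout; the animal's food-seeking behavior, not any
# verbal report, tells us which image is in the phenogram at that moment.
for trial in range(5):
    reaches_for_food = (current_percept() == FOOD_CUE)
    inferred = FOOD_CUE if reaches_for_food else OTHER_CUE
    print("trial", trial, "-> inferred percept:", inferred)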

Of course, the ultimate goal is to sufficiently describe the brain activity itself, and somehow be able to observe it, so that we can know by observation of brain activity alone what's going on consciously. But we're nowhere near that right now.

But let's be clear, consciousness is not an "abstraction". Again, please re-read my post defining consciousness.
 