
Are You Conscious?

Are you conscious?

  • Of course, what a stupid question
    Votes: 89 (61.8%)
  • Maybe
    Votes: 40 (27.8%)
  • No
    Votes: 15 (10.4%)

Total voters: 144
Assuming that it is conscious, the hypothetical designer should be able to tell us what it's experiencing and how similar or different its experiences are from our own.

Maybe the designer comes up with an explanation you don't accept as describing genuine consciousness, but lets you interact with the device to see for yourself.
 
Assuming that it is conscious, the hypothetical designer should be able to tell us what it's experiencing and how similar or different its experiences are from our own.

Only up to a point.

The final ingredient in any subjective experience is the act of being the thing experiencing, which can't be duplicated or shared in any way without ... well, becoming the thing in question.

Kind of like in Avatar, or something like that.

That's just the issue here. The tricky part of consciousness isn't so much identifying behaviors associated with conscious entities but understanding what it means for something to have subjective experience. Whatever underlies our subjective experience, it's clearly based upon the physics of whatever our brains are doing. The computational aspect just determines how physical stimuli are filtered to the subject; it doesn't explain the capacity to experience them in the first place or tell us anything about the principles that determine the variation of experience.

If we can physically identify the exact physical process that is the -sufficient- indicator of experiencing things like "the redness of red", or some other sensation/emotion, we would have scientifically pinned down qualia. Combined with a rigorous theory of consciousness meeting the criteria I mentioned earlier, science will finally have an indisputable answer to all the Chalmers of the world.

Not necessarily. The design may have been made with a genetic algorithm, or any other kind of self-adaptive method. Or, maybe the designer just carefully copied somebody's brain structure into the machine, without knowing how it works.

The only issue is that none of those methods have so far produced anything that exhibits behavior indicative of volition. Even if a synthetic conscious system were eventually developed by accident, we're still stuck with having to understand the specific mechanisms of how it's produced.
 
Wait, wait wait ...

If we label the actions of typing a character as T1, T2, etc, and collecting the last typed character as C1, C2, etc, then a "correct" algorithm might be to type a character, collect it, type another, collect it, and so on, like this:

T1, C1, T2, C2, T3, C3 ... Tn, Cn.

Now, if -- granted, due to a timing error -- the sequence is disturbed, such that two characters are typed before collection, and therefore one character is lost, the sequence might be like this:

T1, C1, T2, T3, C2, T4, C3 ... Tn, C(n-1).

And you are going to honestly claim that even though the error is clearly caused by the sequence of operations being out of order, the error isn't reducible to order-dependence?
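The two interleavings can be sketched in code. This is a minimal model of my own: I'm assuming the collector reads a single shared slot that always holds the last typed character, so a second keypress before collection overwrites the first.

```python
# Hypothetical model of the T/C sequences above: typing writes into a
# single shared slot ("the last typed character"); collecting reads it.
def run(schedule, typed):
    """schedule: string of 'T' (type) and 'C' (collect) events."""
    slot = None
    collected = []
    chars = iter(typed)
    for op in schedule:
        if op == 'T':
            slot = next(chars)   # a second 'T' overwrites the slot
        else:
            collected.append(slot)
    return collected

# Correct sequence T1, C1, T2, C2, T3, C3:
print(run('TCTCTC', 'abc'))    # ['a', 'b', 'c']

# Disturbed sequence T1, C1, T2, T3, C2, T4, C3:
print(run('TCTTCTC', 'abcd'))  # ['a', 'c', 'd'] -- 'b' was overwritten
```

Whether you call that result "out of order" or "missing data" is exactly the point in dispute.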

Huh?

The data isn't "out of order". There is data missing. It's not part of the Turing paradigm to have missing data. It's not allowed for. Some essential elements of computing aren't dealt with by the Turing model; this is one of them.
 
falkowsi said:
Agreed, yet ... does that seem to adequately explain what you know of the single data point you term consciousness?

I don't see how we can explain it better ...
Nor do I. Yet, are you entirely happy that your consciousness has been fully captured and described by your public behavior?

I'm not, I admit.

Sure, you can try looking in the brain, but how do you determine you're looking at the right place?

Suppose you study the brain, and after many years of mapping and research, you've located a small area where you think 'consciousness' sits. You cut out the area, and if your hunch is right, the person no longer has consciousness.

After the procedure, you talk to your test subject, and no matter how long you talk about experiences, feelings, free will, emotions, whatever, you can't tell the difference.

What's the conclusion? Wrong area, or right area, and the person is now a p-zombie?
At death we will agree no consciousness remains. Prior to that I doubt brain dissections, or any other measurement, will ever fully answer the question.

PS: I'm not a good materialist, apparently. Then again, the answer may just be too complex and ever-changing to be effectively analyzed.
 
The only issue is that none of those methods have so far produced anything that exhibits behavior indicative of volition. Even if a synthetic conscious system were eventually developed by accident, we're still stuck with having to understand the specific mechanisms of how it's produced.

I agree we've never produced anything close to human consciousness. I'm merely talking about hypothetical possibilities, based on what we know about Turing machines and physics.

Nevertheless, I think it's easier to come up with a conscious computer, than it is to understand how it actually works. And if you want to know how it works, it's easier to look at a computer than to look at a live human brain.
 
Nor do I. Yet, are you entirely happy that your consciousness has been fully captured and described by your public behavior?

I'm not, I admit.

I admit it doesn't feel like that, but the alternatives make even less sense (either the brain uses non-computable physics, or some other people might be p-zombies, but not me).

Also, from a biological evolution standpoint, public behavior is the only thing that ever matters. As far as evolution is concerned, p-zombies are good enough.

The inescapable conclusion is that this is what it feels like to be a p-zombie.
 
I agree we've never produced anything close to human consciousness. I'm merely talking about hypothetical possibilities, based on what we know about Turing machines and physics.

Nevertheless, I think it's easier to come up with a conscious computer, than it is to understand how it actually works. And if you want to know how it works, it's easier to look at a computer than to look at a live human brain.

LOL!

"If you want to understand consciousness, don't study actual conscious brains, just build a computer and pretend it's conscious." :p
 
LOL!
"If you want to understand consciousness, don't study actual conscious brains, just build a computer and pretend it's conscious." :p

The assumption is that we first agree that the computer is conscious by analyzing its behavior, and then we examine how it works.

Laugh all you want, but I've yet to see a practical approach to finding "the redness of red" in a huge blob of neurons.
 
The data isn't "out of order". There is data missing. It's not part of the Turing paradigm to have missing data. It's not allowed for. Some essential elements of computing aren't dealt with by the Turing model; this is one of them.

If the keypresses are part of the algorithm (not merely data), then the steps are out of order.

If the keypresses are merely data that the algorithm operates on, then there is no such thing as "missing" data, because data is only what is available for the algorithm to operate on. That is the definition of data.

So either the algorithm itself is disturbed because it went out of order, or the algorithm is just fine and dandy and there was simply an event in the external world that the algorithm won't respond to.

Both of these possibilities are perfectly compatible with the "Turing paradigm," thus your argument is wrong.
 
The assumption is that we first agree that the computer is conscious by analyzing its behavior, and then we examine how it works.

Sorry, but in science it's generally a good idea to investigate the actual phenomenon in question to gain understanding of it, rather than make toys and pretend that they're the real object of study.

Laugh all you want, but I've yet to see a practical approach to finding "the redness of red" in a huge blob of neurons.

Well that "huge blob of neurons" happens to be the only unequivocal example of a system which produces conscious experience available to us. We're obliged to study it if we're to understand actual consciousness, let alone learn how to create it artificially.
 
What I'm saying is that whatever underlies our subjective experience, it's clearly based upon the physics of whatever our brains are doing.
I think all the thread participants understand that. We disagree on how significant the underlying physics are. I think the most significant thing about the underlying physical principles is that they are rich enough to build programmable information processing structures (neurons and neural nets, in this case). Once that critical threshold is hit, I think that the information processing capabilities become more interesting than the underlying physics.

The computational aspect just determines how physical stimuli are filtered to the subject; it doesn't explain the capacity to experience them in the first place or tell us anything about the principles that determine the variation of experience.
Are you making the claim that the computational aspect cannot explain the capacity to experience things subjectively in principle, or merely that we do not know enough to frame an explanation that we can verify?

If we can physically identify the exact physical process that is the -sufficient- indicator of experiencing things like "the redness of red", or some other sensation/emotion, we would have scientifically pinned down qualia.

I think that the physical correlates of qualia will be significant to the degree that the physical correlates of the data representing the text I am typing in right now are significant. The physical correlates have to exist, and they have to be sufficient unto the task, but beyond that the more abstract information they correlate to is more interesting.

Combined with a rigorous theory of consciousness meeting the criteria I mentioned earlier, science will finally have an indisputable answer to all the Chalmers of the world.
We agree in some sense here, amazingly enough. :)
 
What I'm saying is that whatever underlies our subjective experience, it's clearly based upon the physics of whatever our brains are doing.

I think all the thread participants understand that. We disagree on how significant the underlying physics are. I think the most significant thing about the underlying physical principles is that they are rich enough to build programmable information processing structures (neurons and neural nets, in this case). Once that critical threshold is hit, I think that the information processing capabilities become more interesting than the underlying physics.

Information can quite literally take an infinite number of forms, and information processing is ubiquitous. In the case of consciousness, however, we're not so much speaking of the -processing- of information but the -experience- of information. Computation just refers to the functional constraints imposed upon a given physical system, but computation itself is not physics. Consciousness qua consciousness is not the result of brain computation, per se, but of the physical interactions those computations are instantiated by.

The computational aspect just determines how physical stimuli are filtered to the subject; it doesn't explain the capacity to experience them in the first place or tell us anything about the principles that determine the variation of experience.

Are you making the claim that the computational aspect cannot explain the capacity to experience things subjectively in principle, or merely that we do not know enough to frame an explanation that we can verify?

In principle, computation cannot explain subjective experience. At best, it describes how those experiences are organized.

If we can physically identify the exact physical process that is the -sufficient- indicator of experiencing things like "the redness of red", or some other sensation/emotion, we would have scientifically pinned down qualia.

I think that the physical correlates of qualia will be significant to the degree that the physical correlates of the data representing the text I am typing in right now are significant. The physical correlates have to exist, and they have to be sufficient unto the task, but beyond that the more abstract information they correlate to is more interesting.

Yes, I think we're on the same page here. Where I differ from you on this is that I'm cognizant of the fact that we -must- identify the sufficient physical correlates of qualia before we can implement artificial systems that we know, unequivocally, possess consciousness. Remember, the goal here is 'hardware' implementation of systems with conscious capacity. To accomplish that we need the physics down first.

Combined with a rigorous theory of consciousness meeting the criteria I mentioned earlier, science will finally have an indisputable answer to all the Chalmers of the world.

We agree in some sense here, amazingly enough. :)

Seriously, what better way to prove Chalmers wrong than to pin down the physics of consciousness and use that knowledge for practical applications? :cool:
 
Sorry, but in science it's generally a good idea to investigate the actual phenomenon in question to gain understanding of it, rather than make toys and pretend that they're the real object of study.

The best idea in science is the one that works. Using toys and models in order to test an idea is a well-established technique.

How do we even know there's an actual phenomenon to be found, or that we would recognize it if we saw it? One way or another, you're going to have to split up "the redness of red" into millions of pieces, and assign each piece to a neuron. I don't find that any more intuitively appealing than assigning each piece to a memory location.
 
Computation just refers to the functional constraints imposed upon a given physical system, but computation itself is not physics. Consciousness qua consciousness is not the result of brain computation, per se, but of the physical interactions those computations are instantiated by.

I'm still not sure exactly where we disagree. Can you show which of the 4 points below you disagree with?

1. Whatever the brain does, is computable. This means that the brain could in theory be faithfully simulated on a computer (large and fast enough).

2. We can hook up some I/O devices, like a sound and video system to this computer system, so we can communicate with the simulation in real-time. Consciousness does not reside in this I/O system.

3. If the computer, running on any kind of suitable hardware, produces behavior that is consistently indistinguishable from a conscious person, we'll say that the computer is conscious.

4. Computable systems can be mapped to an endless list of physical devices without changing functional results.
 
The best idea in science is the one that works. Using toys and models in order to test an idea is a well-established technique.

That's all well & good, but when you haven't even physically identified the thing you wish to model [in this case qualia] the model isn't going to give any insight. What you're proposing is akin to trying to model heredity without any empirical knowledge of how -real- heredity actually works. I'm sorry, but you're flat-out wrong on this and there's no amount of argumentation you can present that can change that fact.

How do we even know there's an actual phenomenon to be found, or that we would recognize it if we saw it? One way or another, you're going to have to split up "the redness of red" into millions of pieces, and assign each piece to a neuron. I don't find that any more intuitively appealing than assigning each piece to a memory location.

Scientists didn't have to break DNA up into "millions of pieces" in order to physically identify it as the molecule of heredity, nor did they "split up" the photon in order to identify it as the physical constituent of light. Just drop it already.
 
Computation just refers to the functional constraints imposed upon a given physical system, but computation itself is not physics. Consciousness qua consciousness is not the result of brain computation, per se, but of the physical interactions those computations are instantiated by.

I'm still not sure exactly where we disagree. Can you show which of the 4 points below you disagree with?

Let's see...

1. Whatever the brain does, is computable. This means that the brain could in theory be faithfully simulated on a computer (large and fast enough).

To...

1. Whatever the [dynamo] does, is computable. This means that the [dynamo] could in theory be faithfully simulated on a computer (large and fast enough).

Uh-huh...

2. We can hook up some I/O devices, like a sound and video system to this computer system, so we can communicate with the simulation in real-time. Consciousness does not reside in this I/O system.

To...

2. We can hook up some I/O devices, like a sound and video system to this computer system, so we can communicate with the simulation in real-time. [Electrical power is not generated in] this I/O system.

Okay...

3. If the computer, running on any kind of suitable hardware, produces behavior that is consistently indistinguishable from a conscious person, we'll say that the computer is conscious.

To...

3. If the computer, running on any kind of suitable hardware, produces behavior that is consistently indistinguishable from a [running dynamo], we'll say that the computer is [an electrical generator].

And finally...

4. Computable systems can be mapped to an endless list of physical devices without changing functional results.

I'd have to say no.
 
If the keypresses are part of the algorithm (not merely data), then the steps are out of order.

If the keypresses are merely data that the algorithm operates on, then there is no such thing as "missing" data, because data is only what is available for the algorithm to operate on. That is the definition of data.

So either the algorithm itself is disturbed because it went out of order, or the algorithm is just fine and dandy and there was simply an event in the external world that the algorithm won't respond to.

Both of these possibilities are perfectly compatible with the "Turing paradigm," thus your argument is wrong.

The "Turing paradigm" simply doesn't deal with the situation. The Turing machine deals with the data it gets. There is no concept of missing or erroneous data.

The real-time paradigm is what we use to deal with the situation.
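One way to make the distinction concrete (my own sketch, with an arbitrary 0.05-second deadline): a Turing-style computation is a function of the input it actually receives, while a real-time system also treats the passage of time, and hence the absence of input, as something it can react to.

```python
import queue

# Turing-style view: a pure function of whatever data it is given.
# There is no notion of data that "should have" arrived but didn't.
def transform(data):
    return [ch.upper() for ch in data]

# Real-time view: a deadline lets the collector observe the *absence*
# of data, something the pure function above cannot express.
def collect_realtime(q, deadline=0.05):
    events = []
    while True:
        try:
            events.append(q.get(timeout=deadline))
        except queue.Empty:
            events.append('TIMEOUT')  # meaningful only in real time
            return events

q = queue.Queue()
q.put('a')
q.put('b')
print(transform('ab'))      # ['A', 'B']
print(collect_realtime(q))  # ['a', 'b', 'TIMEOUT']
```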
 
@akumanimani:

Nice trick to change all my words, but that's not an answer. How about treating each item individually instead, and explaining where the problem is?

1. Whatever the [dynamo] does, is computable. This means that the [dynamo] could in theory be faithfully simulated on a computer (large and fast enough).

No problem. The dynamo has several moving parts, magnetic fields, and produces a current in a wire. All these can be simulated accurately. The output would be a waveform of the produced current and voltage.

We can hook up some I/O devices, like a sound and video system to this computer system, so we can communicate with the simulation in real-time. [Electrical power is not generated in] this I/O system.

Obviously, a simple D/A converter with an amplifier can generate arbitrary waveforms of electrical power. Note that electrical power is generated.
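For what it's worth, here is the kind of dynamo simulation I mean, reduced to a toy: an idealized single rotating coil whose induced EMF follows Faraday's law. All parameter values are illustrative, not taken from the discussion.

```python
import math

N = 100                # turns in the coil
B = 0.5                # field strength, tesla
A = 0.01               # coil area, m^2
w = 2 * math.pi * 50   # rotation rate, rad/s (50 Hz)

def emf(t):
    """Induced voltage of the idealized dynamo at time t (seconds)."""
    return N * B * A * w * math.sin(w * t)

# Sample the first quarter turn at 1 ms steps; the waveform peaks at
# N*B*A*w, about 157.1 V, at t = 5 ms (a quarter of the 20 ms period).
samples = [round(emf(k * 0.001), 1) for k in range(6)]
print(samples)
```

This produces the waveform as numbers; feeding it through a D/A converter and amplifier, as described above, is what turns the numbers back into physical power.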
 
The "Turing paradigm" simply doesn't deal with the situation. The Turing machine deals with the data it gets. There is no concept of missing or erroneous data.

The real-time paradigm is what we use to deal with the situation.

I don't see a distinction between the head+tape of an abstract Turing machine and the registers+memory of a real computer.

If data is on the tape, it can be part of the algorithm being implemented by the machine. If not, not. If data is in memory, it can be part of the algorithm being implemented by the CPU. If not, not.

What you are doing here is claiming that the abstract Turing machine is somehow equivalent to the entire computer, rather than just the CPU, while at the same time asserting that the keypresses aren't part of the algorithm run on the Turing machine.

That is just nonsense -- if the TM is equivalent to the entire computer, then the keypresses are steps in the algorithm rather than data and this is an order dependence problem. If the TM is only equivalent to the CPU, then your claim about missing/erroneous data is also wrong because the CPU doesn't have any concept of such a thing any more than a TM does.

So which is it? In which way do you want to be wrong this time?
 
@akumanimani:

Nice trick to change all my words, but that's not an answer. How about treating each item individually instead, and explaining where the problem is?

1. Whatever the [dynamo] does, is computable. This means that the [dynamo] could in theory be faithfully simulated on a computer (large and fast enough).

No problem. The dynamo has several moving parts, magnetic fields, and produces a current in a wire. All these can be simulated accurately. The output would be a waveform of the produced current and voltage.

Is your goal to -simulate- magnetic fields and electrical currents or -physically- generate them?

We can hook up some I/O devices, like a sound and video system to this computer system, so we can communicate with the simulation in real-time. [Electrical power is not generated in] this I/O system.

Obviously, a simple D/A converter with an amplifier can generate arbitrary waveforms of electrical power. Note that electrical power is generated.

Do you mean by the simulation or the actual physical hardware? Somehow I think you're still missing the point :rolleyes:
 