
Are You Conscious?

Are you conscious?

  • Of course, what a stupid question
    Votes: 89 (61.8%)
  • Maybe
    Votes: 40 (27.8%)
  • No
    Votes: 15 (10.4%)

  Total voters: 144
Once you agree that what I know about what we term consciousness -- which I suspect I know better than anything else -- is nothing more than my public behavior, you are where Pixy & RD are. Thermostats, toasters, etc. are then also conscious.

From what you know about the single point of consciousness you will ever be privy to, do you agree?

Whether you want to say a thermostat has very rudimentary consciousness, or none at all, is an arbitrary choice. For sure, they don't have the consciousness that we humans do.

My point is that wherever you draw the line (fuzzy as it may be), all you have to judge, and all you can judge is public behavior, or functionality.
 
Red herring: The brain is not real-time. It is merely sufficiently fast for certain purposes. It manages this by a combination of ignoring most of the information presented to it, huge lookup tables that take decades to construct, and guesswork.

OK, I'll leave aside the usual stuff and concentrate on the one obvious point:

All real time processing is "sufficiently fast for certain purposes". In the case of controlling a water works, for example, a response time of a few seconds is fine. In other situations, a response time of microseconds might be needed. It's never instantaneous, and there's always a margin for error.

What this means is that human beings are quick enough to catch a ball, but too slow to dodge a bullet. The abilities of the brain to control the body are attuned to the physical capabilities of the body, pretty much.
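To put a number on that idea, here is a minimal sketch of a soft real-time control loop, where "real time" only means finishing each cycle within a deadline suited to the task. The `read_sensors` and `act` callables are hypothetical stand-ins, not any particular API:

```python
# A soft real-time loop: "real time" just means each cycle finishes
# within a deadline appropriate to the task. (Illustrative sketch;
# read_sensors/act are hypothetical stand-ins.)
import time

DEADLINE = 0.2   # ~200 ms: enough to catch a ball, far too slow for a bullet

def control_loop(read_sensors, act, cycles=100):
    for _ in range(cycles):
        start = time.monotonic()
        act(read_sensors())                      # do this cycle's work
        overrun = time.monotonic() - start - DEADLINE
        if overrun > 0:
            print(f"deadline missed by {overrun:.3f}s")
```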

The essential element is that human beings can, for example, juggle in real time, with real objects. They can also think about juggling. The two things are of course not the same.
 
Whether you want to say a thermostat has very rudimentary consciousness, or none at all, is an arbitrary choice. For sure, they don't have the consciousness that we humans do.

My point is that wherever you draw the line (fuzzy as it may be), all you have to judge, and all you can judge is public behavior, or functionality.
Agreed, yet ... does that seem to adequately explain what you know of the single data point you term consciousness?

That, imo, is a representation of the HPC (the hard problem of consciousness).
 
Wait -- what? Do you honestly mean to tell me that you believe shining red light into a camera means that the color red is being seen? And just what the heck do you mean "I'm also running a simulation"? Just what are you supposed to be 'simulating', anyway? :confused:

I don't know what it really means that "the color red is being seen". I find the concept equally difficult to grasp whether it's in my own head, in your head, or in a computer. All I can do is evaluate the responses of the computer. If those match my own, then I don't see why I should not equate the internal experiences.

Do you think it's possible to program a computer to be a p-zombie?

What I mean by running simulations is that the brain is making internal models of what it sees. If I see a red ball, some of my neurons are firing in such a way that they represent some model of that ball. That's a partial simulation of reality. I can extend this simulation by imagining what happens if I try to reach for the ball and grab it.
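As a toy illustration of that kind of internal model (every name here is made up for the example), think of it as a cheap forward simulation that never touches the real world:

```python
# Toy "internal model": hold a simplified state of the red ball and
# run the model forward to predict a grab, without moving anything
# in the real world. (Illustrative only.)

ball = {"color": "red", "position": 2.0}   # internal representation of the percept
hand = {"position": 0.0, "speed": 1.0}     # internal representation of my hand

def simulate_grab(hand, ball, horizon=5):
    """Imagine reaching: does the hand get to the ball within the horizon?"""
    pos = hand["position"]
    for t in range(1, horizon + 1):
        pos += hand["speed"]
        if abs(pos - ball["position"]) < 0.1:
            return f"imagined grab succeeds at t={t}"
    return "imagined grab fails within horizon"

print(simulate_grab(hand, ball))   # prediction only; no actual movement
```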
 
Agreed, yet ... does that seem to adequately explain what you know of the single data point you term consciousness?

I don't see how we can explain it better ...

Sure, you can try looking in the brain, but how do you determine you're looking in the right place?

Suppose you study the brain, and after many years of mapping and research, you've located a small area where you think 'consciousness' sits. You cut out the area, and if your hunch is right, the person no longer has consciousness.

After the procedure, you talk to your test subject, and no matter how long you talk about experiences, feelings, free will, emotions, whatever, you can't tell the difference.

What's the conclusion? Wrong area, or right area, and the person is now a p-zombie?
 
I don't know what it really means that "the color red is being seen". I find the concept equally difficult to grasp whether it's in my own head, in your head, or in a computer. All I can do is evaluate the responses of the computer. If those match my own, then I don't see why I should not equate the internal experiences.

Because the internal experiences do not necessarily translate into a particular external response. There is no outward behavior that necessarily establishes that red is being seen or pain is being experienced. For example, a paralytic could be conscious and able to experience visual stimuli and pain, yet unable to articulate a motor or verbal response to those experiences. Likewise, an animatronic system could be programmed to respond to a tactile stimulus by making a wincing facial expression, but that does not mean that it's actually experiencing pain or even that it's capable of such.

Do you think it's possible to program a computer to be a p-zombie?

I think it's possible to fool a person into believing that a non-conscious system is conscious. I do not, however, think that p-zombies [i.e. non-conscious entities fundamentally indiscernible from conscious entities] are possible even in principle.

What I mean by running simulations is that the brain is making internal models of what it sees. If I see a red ball, some of my neurons are firing in such a way that they represent some model of that ball. That's a partial simulation of reality. I can extend this simulation by imagining what happens if I try to reach for the ball and grab it.

Having a complex stimulus-response mechanism does not establish that the system in question has any subjective experience of its stimuli.
 
I do not, however, think that p-zombies [i.e. non-conscious entities fundamentally indiscernible from conscious entities] are possible even in principle.

So, you'll agree that a computer system that is indiscernible from conscious entities is therefore really conscious?
 
Because the internal experiences do not necessarily translate into a particular external response. There is no outward behavior that necessarily establishes that red is being seen or pain is being experienced. For example, a paralytic could be conscious and able to experience visual stimuli and pain, yet unable to articulate a motor or verbal response to those experiences. Likewise, an animatronic system could be programmed to respond to a tactile stimulus by making a wincing facial expression, but that does not mean that it's actually experiencing pain or even that it's capable of such.

The computer system will have to be equipped with a reasonably full set of I/O devices, in order for us to probe the inner workings. A speaker and microphone would be essential to have a meaningful discussion about the feeling of pain, for instance.
 
No, it doesn't. If you don't collect a typed character before the next character is typed, it is lost. That is not order dependence. That is time dependence.

I don't know why you are pretending that there is no distinction between Turing- and real-time programming.

Wait, wait, wait ...

If we label the actions of typing a character as T1, T2, etc, and collecting the last typed character as C1, C2, etc, then a "correct" algorithm might be to type a character, collect it, type another, collect it, and so on, like this:

T1, C1, T2, C2, T3, C3 ... Tn, Cn.

Now, if -- granted, due to a timing error -- the sequence is disturbed, such that two characters are typed before collection, and therefore one character is lost, the sequence might be like this:

T1, C1, T2, T3, C2, T4, C3 ... Tn, C(n-1).

And you are going to honestly claim that, even though the error is clearly caused by the sequence of operations being out of order, the error isn't reducible to order-dependence?

Huh?
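Just to make the T/C sequences above concrete, here's a minimal sketch of the scenario: a single-slot buffer that gets overwritten when a second key arrives before the previous one is collected. This is a toy model, not any particular keyboard driver:

```python
# Single-slot buffer: typing overwrites the slot, collecting empties it.
class SingleSlotBuffer:
    def __init__(self):
        self.slot = None              # holds at most one character

    def type_char(self, ch):          # T_n: a key press overwrites the slot
        self.slot = ch

    def collect(self):                # C_n: reading empties the slot
        ch, self.slot = self.slot, None
        return ch

buf = SingleSlotBuffer()

# Correct interleaving T1, C1, T2, C2, T3, C3:
out = ""
for ch in "abc":
    buf.type_char(ch)
    out += buf.collect()
print(out)                            # -> "abc"

# Disturbed interleaving T1, C1, T2, T3, C2:
buf.type_char("a"); out = buf.collect()   # T1, C1
buf.type_char("b")                        # T2
buf.type_char("c")                        # T3 overwrites 'b'
out += buf.collect()                      # C2 returns 'c'
print(out)                                # -> "ac"; the order slip lost 'b'
```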
 
How do you think one would go about doing that in practice without knowing how to physically identify consciousness qua consciousness?

Well, let's assume we can trust the designer not to put a little person inside, or have somebody remotely control the device. Just in case we doubt their motives, we can examine the design more carefully.

Assuming no tricks are being played, we just look at the behavior. If it's indistinguishable from a conscious person, we'll assume the device is conscious.
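If it helps, that behavior-only criterion can be written down as a bare-bones, Turing-test-style check. A minimal sketch, where `ask_human`, `ask_device`, and `judge` are hypothetical stand-ins supplied by whoever runs the test:

```python
# Bare-bones behavioral test: the judge sees only transcripts, never
# the machinery. Guesses hovering around chance (~0.5) mean behavior
# alone cannot separate the device from the person.
import random

def behavioral_test(ask_human, ask_device, judge, questions):
    hits = 0
    for q in questions:
        answers = [("human", ask_human(q)), ("device", ask_device(q))]
        random.shuffle(answers)                          # hide which is which
        guess = judge(q, [text for _, text in answers])  # judge returns 0 or 1
        if answers[guess][0] == "device":
            hits += 1
    return hits / len(questions)
```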
 
They state that it's an open empirical question whether M is true. IOW, M has not been confirmed. Note that the "only" which I highlighted is not part of the original quote. One has to be careful in reading these things.

They state that it is an open empirical question because one can conceive of machines that might invalidate M. They also state that such machines would not obey the currently understood laws of our universe.

I can also say that the thesis "a dropped apple will accelerate towards the center of the Earth" is an "open empirical question" because I can conceive of an apple not doing so.

Such a thing is utter stupidity, though, because all the inductive evidence we have thus far supports the law of gravity.

Just as -- no surprise -- all the inductive evidence we have thus far supports M.

And referring to M as if it were equivalent to Church-Turing is simply wrong. Stating that Church-Turing implies things that are implied only by M is simply wrong. Stating that M is demonstrably true is simply wrong.

M is simply an extension of CT with "common sense" factored in.

In other words, if you start with CT, and observe that the formal definition of "effective method" covers every single process ever observed in reality, and roll that in, you end up with M.
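Spelled out side by side (my paraphrase of the standard formulations, not a quote from the article):

```latex
% CT concerns idealized human calculation; M concerns physical machines.
\begin{align*}
\text{(CT)}\quad & \text{every effectively calculable function is Turing-computable} \\
\text{(M)}\quad  & \text{every function computable by a physical machine is Turing-computable}
\end{align*}
```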

So yes, it is incorrect to say that M is indisputably true, because we have not exhaustively enumerated every single process in our reality to make sure they are all covered by "effective method."

But if you are going to argue against M on that basis, you might as well become a theist since "nobody can prove God don't exist" either. Oops, I forgot -- you are a theist. Hmmm... coincidence????

The article is quite clear in showing how advocates of a particular AI viewpoint have misused Church-Turing. That's why a significant part of the article deals with the ways in which CT has been misinterpreted. One might hope that, now that this has been explained in detail, the persistent claims that CT proves that Turing machines are sufficient to produce consciousness would be abandoned. Of course this won't happen. At least now we have a BS-marker.

No, it is not clear at all, and it isn't a "BS-marker" because the article is clearly biased against the computational model (or at least playing devil's advocate).

Because the only "misuse" of CT that one could glean from that article is that said advocates of a particular viewpoint have made the (according to the article) invalid assumption that, just because every single darned process we have ever observed satisfies the notion of an "effective method", consciousness will as well.

And I am sorry to tell you, but that is a pretty lame criticism. "Yeah, your model makes sense if there isn't magic involved, but that is only because you haven't accounted for magic, so what if magic exists?"

Gimme a break. Seriously.
 
Well, let's assume we can trust the designer not to put a little person inside, or have somebody remotely control the device. Just in case we doubt their motives, we can examine the design more carefully.

Assuming no tricks are being played, we just look at the behavior. If it's indistinguishable from a conscious person, we'll assume the device is conscious.

Assuming that it is conscious, the hypothetical designer should be able to tell us what it's experiencing and how similar or different its experiences are from our own.
 
How do you think one would go about doing that in practice without knowing how to physically identify consciousness qua consciousness?

Falkowski has a very good point, Aku, and frankly I think most of you on the other side of this debate categorically ignore it.

The whole Chinese-room and/or external-controller argument just doesn't work, because it is pretty clear when a system is behaving of its own accord or not.

For example, if we wire up a machine to learn stuff and then modify its behavior accordingly, and eventually it starts talking to us like another human, we are pretty darn sure that there is no Chinese room or external controller responsible. I mean, if we are the ones that programmed the thing to begin with, and it is doing stuff we never programmed it to do ...
 
Assuming that it is conscious, the hypothetical designer should be able to tell us what it's experiencing and how similar or different its experiences are from our own.

Only up to a point.

The final ingredient in any subjective experience is the act of being the thing experiencing, which can't be duplicated or shared in any way without ... well, becoming the thing in question.

Kind of like in Avatar, or something like that.
 
Assuming that it is conscious, the hypothetical designer should be able to tell us what it's experiencing and how similar or different its experiences are from our own.

Not necessarily. The design may have been made with a genetic algorithm, or any other kind of self-adaptive method. Or maybe the designer just carefully copied somebody's brain structure into the machine, without knowing how it works.
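A genetic algorithm is a good example of how a design can work without its author ever specifying (or understanding) its internals. A toy sketch, where the fitness function is simply "count the ones" and the winning bit string is never written by hand:

```python
# Toy genetic algorithm: the "designer" supplies only a fitness score;
# selection, crossover, and mutation produce the final artifact.
import random

def evolve(fitness, length=20, pop_size=30, generations=100, mut_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]             # one-point crossover
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)       # maximize the number of 1s
print(best, sum(best))           # typically all (or nearly all) ones
```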
 
