AkuManiMani
Category error.
Red herring.
Non-sequitur.
Sometimes I wonder if your trite responses are some form of passive aggression or if you really are that stupid.
Once you agree that what I know about what we term consciousness -- which I suspect is more than I know about anything else -- is nothing more than my public behavior, you are where Pixy & RD are. Thermostats, toasters, etc. are also then conscious.
From what you know about the single point of consciousness you will ever be privy to, do you agree?
Red herring: The brain is not real-time. It is merely sufficiently fast for certain purposes. It manages this by a combination of ignoring most of the information presented to it, huge lookup tables that take decades to construct, and guesswork.
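To make that point concrete, here is a toy sketch in Python (the table, names, and numbers are invented purely for illustration, not a claim about how brains actually do it): a system can look "real-time" by discarding most of its input, answering from a precomputed table when it can, and guessing when it can't.

```python
import random

# Toy "fast responder": illustrative only.
LOOKUP = {                 # precomputed stimulus -> response pairs ("decades" of lookup table)
    "red ball": "catch it",
    "loud bang": "flinch",
}

def respond(stimuli, budget=3):
    """Answer within a fixed budget by ignoring, looking up, or guessing."""
    sample = stimuli[:budget]          # ignore most of the information presented
    for s in sample:
        if s in LOOKUP:                # cheap table lookup when possible
            return LOOKUP[s]
    return random.choice(["duck", "stare", "ignore"])   # otherwise, guesswork

print(respond(["loud bang", "birdsong", "red ball", "rain", "traffic"]))
```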
You need a better-grade thermostat. Mine appears to have more compute power than an Apollo mission.
Thermostats are not conscious. No self-reference.
Agreed, yet ... does that seem to adequately explain what you know of the single data point you term consciousness?
Whether you want to say a thermostat has very rudimentary consciousness, or none at all, is an arbitrary choice. For sure, they don't have the consciousness that we humans do.
My point is that wherever you draw the line (fuzzy as it may be), all you have to judge, and all you can judge, is public behavior, or functionality.
Wait -- what? Do you honestly mean to tell me that you believe shining red light into a camera means that the color red is being seen? And just what the heck do you mean "I'm also running a simulation"? Just what are you supposed to be 'simulating', anyway?
I don't know what it really means that "the color red is being seen". I find the concept equally difficult to grasp whether it's in my own head, in your head, or in a computer. All I can do is evaluate the responses of the computer. If those match my own, then I don't see why I should not equate the internal experiences.
Do you think it's possible to program a computer to be a p-zombie?
What I mean by running simulations is that the brain is making internal models of what it sees. If I see a red ball, some of my neurons are firing in such a way that they represent some model of that ball. That's a partial simulation of reality. I can extend this simulation by imagining what happens if I try to reach for the ball and grab it.
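A loose analogy in code (the class and numbers below are invented for illustration): an internal model is just state that stands in for the ball, and "imagining" the grab is rolling that model forward without acting on the world.

```python
from dataclasses import dataclass

@dataclass
class BallModel:
    # Internal stand-in for the real ball: colour and distance in metres.
    colour: str
    distance: float

def imagine_grab(model: BallModel, reach: float = 0.7) -> str:
    """Roll the internal model forward without touching the real world."""
    if model.distance <= reach:
        return f"predict: hand closes on the {model.colour} ball"
    return f"predict: need to move {model.distance - reach:.1f} m closer first"

# "Seeing" the red ball builds the model; imagining the grab extends the simulation.
print(imagine_grab(BallModel(colour="red", distance=1.2)))
```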
I do not, however, think that p-zombies [i.e. non-conscious entities fundamentally indiscernible from conscious entities] are possible even in principle.
Because the internal experiences do not necessarily translate into a particular external response. There is no outward behavior that necessarily establishes that red is being seen or pain is being experienced. For example, a paralytic could be conscious and able to experience visual stimuli and pain, yet unable to articulate a motor or verbal response to those experiences. Likewise, an animatronic system could be programmed to respond to a tactile stimulus by making a wincing facial expression, but that does not mean that it's actually experiencing pain, or even that it's capable of such.
So, you'll agree that a computer system that is indiscernible from conscious entities is therefore really conscious?
It either A) possesses its own consciousness, or B) it's controlled by another agent that is.
No, it doesn't. If you don't collect a typed character before the next character is typed, it is lost. That is not order dependence. That is time dependence.
I don't know why you are pretending that there is no distinction between Turing- and real-time programming.
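The lost-character point can be shown with a toy sketch (the "device" and timings are simulated, purely for illustration): a single-character register is overwritten if the program doesn't collect it before the next keystroke arrives, whereas a Turing-style computation over a fully buffered input depends only on the order of the characters, not on when they are read.

```python
# Toy single-byte input register, loosely modelled on simple serial hardware (simulated).
class Register:
    def __init__(self):
        self.char = None
    def key_arrives(self, c):
        self.char = c            # overwrites whatever wasn't collected in time
    def collect(self):
        c, self.char = self.char, None
        return c

keystrokes = "hello"
reg = Register()
received = []
for i, c in enumerate(keystrokes):
    reg.key_arrives(c)
    if i % 2 == 0:               # the program only gets around to reading sometimes
        received.append(reg.collect())

print("real-time read:", "".join(received))   # some characters are simply gone
print("buffered read: ", keystrokes)           # order-dependent, not time-dependent
```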
For the sake of the argument, we'll assume we can rule out option 'B'.
How do you think one would go about doing that in practice without knowing how to physically identify consciousness qua consciousness?
They state that it's an open empirical question whether M is true -- thesis M being, roughly, the claim that whatever can be calculated by any machine is Turing-computable, a stronger claim than Church-Turing proper, which concerns only effective (rote, human) calculation. IOW, M has not been confirmed. Note that the 'only' which I highlighted is not part of the original quote. One has to be careful in reading these things.
And referring to M as if it were equivalent to Church-Turing is simply wrong. Stating that Church-Turing implies things that are implied only by M is simply wrong. Stating that M is demonstrably true is simply wrong.
The article is quite clear in showing how advocates of a particular AI viewpoint have misused Church-Turing. That's why a significant part of the article deals with the ways in which CT has been misinterpreted. One might hope that, now that this has been explained in detail, the persistent claims that CT proves Turing machines are sufficient to produce consciousness would be abandoned. Of course this won't happen. At least now we have a BS-marker.
Well, let's assume we can trust the designer not to put a little person inside, or have somebody remotely control the device. Just in case we doubt their motives, we can examine the design more carefully.
Assuming no tricks are being played, we just look at the behavior. If it's indistinguishable from a conscious person, we'll assume the device is conscious.
Assuming that it is conscious, the hypothetical designer should be able to tell us what it's experiencing and how similar or different its experiences are from our own.