
My take on why the study of consciousness may indeed not be as simple as it seems

This is basically just a rephrased and elaborated version of what I've already said:
No it's not. Reread it.
[Notice that what you call "particular pieces" I referred to as "basic building blocks"]
Nope. It's a complete mismatch. The picture I'm painting is entirely different than the picture you're painting.

You have this "conscious part" that gets basic raw drives and works on them like a factory: grinding them, refining them, cooking them, baking them, slicing them, and so on, until eventually that conscious factory has taken these raw goods and consciously produced a conscious goal. You're calling the raw goods being brought into the factory the "basic building blocks".

I'm pointing out that the factory as a whole doesn't grind--just this one part. It doesn't refine--just this one part. Doesn't cook--just that part. Doesn't bake--just that part. Doesn't slice--just that part. The entirety--every single last drop of goal-making--is done only by a tiny part of the factory working in its own corner, passing things to another tiny part of the factory working in its corner.

When I say a part of you, I mean a divisible module that is a reduced portion of who you are--a piece of your psyche. I do not mean a smaller, separate, more primitive layer--I mean a smaller, conjoined, integrated portion. A subcomponent of the complex thing you abbreviate as "conscious awareness" that, were it alone, you would not consider conscious awareness. And I assert that the entirety of what you call conscious awareness is made up of these pieces--they just sling products at each other.

Consider again the canonical example of how you select the individual words that compose the sentences you produce. You know generally what you want to say, and you don't seem to produce the entire sentence at once--you only seem to produce one word at a time. But somehow this thing comes out grammatically correct, chained together in a larger order, and winds up communicating what you intended to communicate... it does so magically. You're not aware of how that is done.

There's a part of you that generates the sentence.
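
If a sketch helps, here is a toy version of the picture I'm painting (Python; every module name and rule below is invented purely for illustration, not a claim about real neural machinery):

```python
# Toy model: no single module "writes the sentence"; each does one
# narrow job and slings its product to the next. All names and rules
# here are invented for illustration.

def intent_module():
    """Supplies only the gist -- not words, not grammar."""
    return {"topic": "weather", "attitude": "complaint"}

def lexicon_module(gist):
    """Offers candidate words for the gist; knows nothing about order."""
    candidates = {
        ("weather", "complaint"): ["again", "it's", "raining"],
    }
    return candidates[(gist["topic"], gist["attitude"])]

def syntax_module(words):
    """Emits one word at a time in grammatical order; has no idea
    what the sentence is about."""
    preferred_order = ["it's", "raining", "again"]
    for word in preferred_order:
        if word in words:
            yield word

def articulation_module(word_stream):
    """Assembles the surface form; understands neither gist nor grammar."""
    return " ".join(word_stream).capitalize() + "."

# The finished sentence just falls out of small parts trading products.
print(articulation_module(syntax_module(lexicon_module(intent_module()))))
# -> It's raining again.
```

The point of the toy: delete any one module and nothing that remains "knows the sentence"; the whole only looks like a single sentence-writer from the outside.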
 
If thermostats are conscious, should they be given moral rights? If a brand of thermostat is about to be destroyed entirely, should we make efforts to prevent its extinction, as with the Endangered Species Act?

Get out of here, troll.
 
Interesting.
The question then is how do we recognize a human?
Do we need to be a conscious being, or a conscious human, to recognise a human?
Is a corpse still a human?
If so when is a corpse no longer a human?

A corpse is the corpse of a human, a human corpse. It is a combination of a human skull, human organs, human blood and other parts.
 
Let me just remind people of a point of information. The reason it's called the "Turing test" rather than the "Westprog test" is because I didn't make it up. In fact, it's older than I am. So the people who find it so objectionable should take that up with the people who find it the final word in the matter. I do not.

However, it's interesting that the argument has developed now to the point where the proponents of computer consciousness don't think that any test is necessary. Proving that consciousness exists in computers is now unnecessary. We don't need to identify or define consciousness, we don't need to test it at all - we just affirm its presence.

I don't find it objectionable. I think it is useful for testing artificial human intelligence.

And I don't know that all computers should be said to be conscious, or where we should draw the line for that term. But requiring that something be able to communicate like a human, and do it so well that other humans are convinced that it is a human, is a nonsensical requirement for something to be recognized as aware of itself.
 
I hardly think so. I regard the test for consciousness as being ad hoc, but remarkably reliable, as applied to human beings. We can distinguish a conscious from an unconscious human, or a wax figure, remarkably easily. We can do it down a phone line. We can do it on this forum. What is being proposed is that this reliable test be abandoned and not replaced with anything at all. Well, replaced with an assertion that "Our computers are conscious right now".

The test works remarkably well for humans, yes. Why would you automatically think it should translate to detecting other, non-human minds?

Oh, that's right, if it's not human, it's not a mind.
 
A corpse is the corpse of a human, a human corpse. It is a combination of a human skull, human organs, human blood and other parts.

Well yes, whilst it is visually recognizable as such.
We would also recognize a photo of a human, but this is just related to visual recognition.
A blind person would not recognize a human corpse.
That is not really the question I meant to ask.
The question is what is required to recognise a human (conscious, unconscious, corpse)?
Is it another conscious human or just a conscious thing?
What is it about consciousness that helps us to recognise another human?
 
Well yes, whilst it is visually recognizable as such.
We would also recognize a photo of a human, but this is just related to visual recognition.
A blind person would not recognize a human corpse.
That is not really the question I meant to ask.
The question is what is required to recognise a human (conscious, unconscious, corpse)?
Is it another conscious human or just a conscious thing?
What is it about consciousness that helps us to recognise another human?

Because a human is aware of me and reacts to my presence in a way that I have become used to from other humans.

If I came across someone who was just standing still doing nothing (but was obviously alive, breathing and such) and did not react to me saying hello, waving, or poking at them, I would assume that they were unconscious, in some sort of trance perhaps. If humans behave like they are aware of their surroundings, you assume they are. I don't see why the same standard shouldn't apply to non-humans.
 
Well yes, whilst it is visually recognizable as such.
We would also recognize a photo of a human, but this is just related to visual recognition.
A blind person would not recognize a human corpse.
That is not really the question I meant to ask.
The question is what is required to recognise a human (conscious, unconscious, corpse)?
Is it another conscious human or just a conscious thing?
What is it about consciousness that helps us to recognise another human?

I would say it is associative learning and pattern recognition, parts of the rubric of consciousness.
 
What's the "working assumption" you're referring to? That the world we observe is real, or that our experience of it is real?

I think the working assumption is supposed to be that we live in a real world that works pretty much the way we think, but our experience of it is an illusion.
 
That's a logical possibility, but if it were true we'd be left with the problem of explaining why it evolved to begin with. It takes energy to maintain conscious states, and in humans it takes up proportionately more energy than in other critters. If it's really that superfluous, it seems like it would have been selected against.

The Strong AI view is that it's something that just happens whenever a handful of neurons are gathered together in a network - so any kind of intelligence will just be conscious regardless.

I agree with your view that consciousness probably has evolutionary benefits. The lesson of this endless discussion is that we have to be careful as to what is probable, what is logically impossible, what is proven, what is certain and what is possible.

Of course, there are a number of true believers, and they will continue to say that they have the unique answer, and will decry the heretics. All the more reason to be very careful. I've come to realise that the big difference lies not in what people believe (which isn't that important) but in what they claim to be certainly true.
 
Who's deriding the Turing test besides you guys?

So, yes. It's almost inconceivable to talk about consciousness without reference to humans, but the only thing we know about human consciousness is the associated set of behaviours.

That's exactly why Turing (who thought about these things quite deeply) selected his test in the way he did.

Since we can design computers, if computer consciousness exists, we can design it to be the kind of consciousness we can test for. We only have the one test for consciousness. Therefore that is the test which we should apply.

Once we have programs which can pass the test, and others which can't, we can start to investigate what the differences are between them. Extract properties and create a theory. Then it should be possible to create computer consciousness and claim that it's conscious even if it doesn't pass the Turing test.
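
Schematically, the investigation loop I have in mind is nothing fancier than this (a Python sketch; the canned respondents, the judge-as-function, and the 30% bar are placeholders I've made up, not any standard harness):

```python
# Sketch of the sorting loop described above. Everything here is a
# stand-in: a real harness would use live conversation and human judges.
import random

class Respondent:
    def __init__(self, name, canned_replies):
        self.name = name
        self.canned_replies = canned_replies  # question -> answer

    def reply(self, question):
        return self.canned_replies.get(question, "I don't know.")

def trial(questions, judge_guess, human, program):
    """One blinded trial: the judge sees two unlabeled transcripts and
    returns the index (0 or 1) of the one it thinks is human.
    Returns True if the program fooled the judge."""
    pair = [human, program]
    random.shuffle(pair)
    transcripts = [[(q, r.reply(q)) for q in questions] for r in pair]
    return pair[judge_guess(transcripts)] is not human

def sort_programs(questions, judge_guess, human, programs, n_trials=50):
    """Split programs into passers and failers, so we can then ask:
    what do the passers have that the failers lack?"""
    passers, failers = [], []
    for program in programs:
        fooled = sum(trial(questions, judge_guess, human, program)
                     for _ in range(n_trials))
        # Turing's suggested ~30% fooling rate is one possible bar.
        (passers if fooled / n_trials >= 0.3 else failers).append(program)
    return passers, failers
```

Once the two piles exist, the interesting work is comparing them: that is the "extract properties and create a theory" step.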

But in the meantime, when no computer exhibits conscious behaviour, we're left with the Pixy assertion that all computers are conscious. What use is that statement in practice?
 
If humans behave like they are aware of their surroundings, you assume they are. I don't see why the same standard shouldn't apply to non-humans.

All inanimate objects react to their surroundings. They can't not react to their surroundings. Indeed, if an object were found that didn't react to its environment, then I'd tend to assume that it was at least intelligent, and possibly an angel.
 
But in the meantime, when no computer exhibits conscious behaviour, we're left with the Pixy assertion that all computers are conscious. What use is that statement in practice?

Well, if we are speaking of practicality, I find that individuals who are sure humans are the only conscious creatures are typically a little frightening. They tend to have an outlook on life that scares me, with behavior to match.

So if nothing else it would be nice just to educate people about the possibility that other things besides themselves and their monkeysphere might have feelings.
 
[Consciousness] may just be an annoying side-effect.

[...]

If it's just a side-effect of the complexities of the human computing power, then it would be irrelevant to selection.

Even assuming consciousness is a "side-effect", I wouldn't consider it annoying. It's what allows us to actually live and experience the world.
 
I think the working assumption is supposed to be that we live in a real world that works pretty much the way we think, but our experience of it is an illusion.

I hope that's not what he meant. If our experience of the world is accurate then, by definition, it's not an illusion.
 
The picture I'm painting is entirely different than the picture you're painting.

You have this "conscious part" that gets basic raw drives and works on them like a factory: grinding them, refining them, cooking them, baking them, slicing them, and so on, until eventually that conscious factory has taken these raw goods and consciously produced a conscious goal. You're calling the raw goods being brought into the factory the "basic building blocks".

I'm pointing out that the factory as a whole doesn't grind--just this one part. It doesn't refine--just this one part. Doesn't cook--just that part. Doesn't bake--just that part. Doesn't slice--just that part. The entirety--every single last drop of goal-making--is done only by a tiny part of the factory working in its own corner, passing things to another tiny part of the factory working in its corner.

When I say a part of you, I mean a divisible module that is a reduced portion of who you are--a piece of your psyche. I do not mean a smaller, separate, more primitive layer--I mean a smaller, conjoined, integrated portion. A subcomponent of the complex thing you abbreviate as "conscious awareness" that, were it alone, you would not consider conscious awareness. And I assert that the entirety of what you call conscious awareness is made up of these pieces--they just sling products at each other.

Consider again the canonical example of how you select the individual words that compose the sentences you produce. You know generally what you want to say, and you don't seem to produce the entire sentence at once--you only seem to produce one word at a time. But somehow this thing comes out grammatically correct, chained together in a larger order, and winds up communicating what you intended to communicate... it does so magically. You're not aware of how that is done.

There's a part of you that generates the sentence.

Okay, then what role do you think consciousness plays in all of this? Does our subjective experience play some causal role in how we interact with the world, or do you suspect that consciousness is an irrelevant epiphenomenon with no functional value in and of itself?
 
I was not so much referring to ducking, but more to the fact that a fist is solid. I would not call realizing the solidity of a fist a habit.
Well, if you study human babies you will find that they lack an innate fear of most things; they do startle in response to loud noise, but they have no 'natural fear' of most things. So constructs that are intuitive, or 'ingrained', are usually learned, such as a fear of heights or the response of ducking. These things are learned and conditioned.

So regardless of whether we call it a habit, conditioning, or a learned response, it is not innate to humans; it is learned.

So again we have the triumvirate of personal experience, social conditioning and culture: we learn that 'things' have 'solidity', we are socially conditioned through talking and others' responses, and then there is the cultural framework 'solid = unchanging'. We are complex, to say the least.
Sure, being a forum for skeptics, I would expect this to be the case. However, implicit in the questioning of common sense is the aim of coming up with a more reliable common sense.
Of course solipsism is always a possible outcome for a sceptic, but it could hardly be a philosophy embraced by a scientist.
The use of 'common sense' probably runs into a language barrier; here in the US it often denotes any commonly held belief.

I personally try to avoid its usage, instead referring to the rationale behind it. Solipsism is unavoidable, but then it does not appear to be true.
This is the case where consequences are merely reported. I tend to be more interested in the physical results of introspection, in which case they speak for themselves.
I can't say I follow you here. The physical impacts? Certainly not the poorly researched ones. So maybe an example?
That may be, especially when it comes to verbal communication of the results of introspection. However, we need not, and in fact mostly do not, communicate introspection solely by rational means. We mostly do it through artistic expression. The question then is how much of this expression conveys something meaningful. The best artists tend to convey a common meaning better.
Certainly, but often the meaning is subjective to the viewer, and responses to artistic expression will vary widely. Any discussion of art will lead to different conclusions for different viewers. For example, in Moby Dick there is a chapter devoted to Starbuck and his courage; I believe that he has courage and that it is not a failure of courage that leads him not to challenge Ahab, but this is hotly debated. Or in Wouk's Caine Mutiny, there is a movie version and a made-for-TV version; in one Queeg is almost craven, in the other he is not.
Then the question remains: without a common objective measure of colour experience, will Mary experience beauty the same way someone with colour perception does?
Now that is a great question. I think that Mary can learn to appreciate the values ascribed to beauty, but she will not ascribe them for the same reasons.
Hmm, surely there are cases similar to Mary's where we would have recorded a physiological response to a stimulus of which a person is unaware?
I am not sure I follow. Are you agreeing that her brain receives the same signals as someone with colour vision, or are you saying that her physiological response upstream of the brain would differ from that of someone with colour vision?
This is the question that would be unethical to answer.

There are multiple layers to color perception: the photoreceptors, the nerves and network of the retina, the optic nerve, and the processing by the visual cortex.

I do not know if her photoreceptors for the longer wavelengths will be active; they likely will be. But she will very likely not have developed the retinal structure associated with color vision for the longer wavelengths, especially where those receptors are contrasted with other photoreceptors. Will the optic nerves be developed to carry the new signals? Unknown. Do they have the capacity to learn them? Probably. But the visual cortex has developed without input from the long-wave receptors, so it is likely she will never perceive the 'color red' the way that someone with full color vision would.

But she might; then again, there does seem to be an issue of developmental cut-offs for so many things. She may not.
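
If it helps, here is a toy way to picture that layered account (a Python sketch; the stage list and the developed/undeveloped flags are my invention for illustration, not a model of real physiology):

```python
# Illustrative only: the layers above treated as a pipeline, where a
# long-wavelength signal is lost at the first stage that never
# developed to handle it. Stage names and flags are invented.

STAGES = [
    ("photoreceptors", True),    # long-wave cones likely still fire
    ("retinal_network", False),  # color-opponent wiring likely undeveloped
    ("optic_nerve", None),       # unknown whether it can carry the signal
    ("visual_cortex", False),    # never received long-wave input
]

def perceive_red(stages):
    """Trace the signal through the layers; report where it stalls."""
    for name, developed in stages:
        if developed is False:
            return "signal lost at %s: pathway never developed" % name
        if developed is None:
            return "fate of the signal unknown at %s" % name
    return "red perceived as a full trichromat would"

print(perceive_red(STAGES))
# -> signal lost at retinal_network: pathway never developed
```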
I am not so sure that it is not relevant. Mary's awareness of colour is not just a matter of her physiology, but also of her ability to abstract one colour from another. If she has no cultural references to the differences in colour, why would we expect her to abstract one colour from another just because her physiology can?
That is unrelated to the complete knowledge issue, which is the fallacy of construction I was referring to.
 
Well, if we are speaking of practicality, I find that individuals who are sure humans are the only conscious creatures are typically a little frightening. They tend to have an outlook on life that scares me, with behavior to match.

So if nothing else it would be nice just to educate people about the possibility that other things besides themselves and their monkeysphere might have feelings.

It's fairly clear who on this thread is trading in possibility, and who is trading in certainty.
 
