
Explain consciousness to the layman.

Status
Not open for further replies.
"Gibberish"

You should at least ask someone to explain what point they are making before labelling it gibberish in ignorance.


In a computer the action is an electrical flow through a conductor, usually metallic. The electrons are herded en masse through great clunking circuits like lambs to the slaughter.
In the brain there is a subtle electrical interplay between complex molecules specifically selected and arranged by the living entity. The life and therefore the emergent awareness and consciousness are a subtle synthesis of molecular and electrical activity on the molecular scale. There is a living molecular connection and interplay throughout the whole body.

Yep. I've seen those electricians in cowboy boots and a stetson herding those electrons into a pen while riding those holey horses.
 
There are probably as many ontological positions as there are philosophers. Chalmers recognises that there is an issue, do you?

I think you'll find, on closer examination of anyone who considers dualism of one form or another, that it is used as a speculative tool, as is monism.

Each philosopher who considers dualism, when questioned on what they think actually exists and what the actually existing ontology is, would, I'm sure, say: I don't know. Humanity can only speculate from its limited perspective, and the actually existing ontology could be any of the ontologies considered by humanity, or one of numerous others of which humanity does not have an inkling.

Perhaps your opposition is to people who believe in dualism. A philosopher would not do that.

Perhaps so, rather like any other musings on existence inevitably do.


It's ridiculous to insist on an absolute dualism before discussing ontologies.

That really should tell you something.
 
Ok I was presenting a caricature of computers.

My point is that the difference we are discussing may well be in the physical mechanisms being compared.

You appear to be claiming that a computer can perform the same computation which a brain can perform. This I can agree with.

You appear to be asserting that this sameness of performance would result in the same experience of consciousness. This I cannot agree with, and it appears to be a non sequitur, as I consider that consciousness is generated by some other aspect of the brain's mechanism than the product of the computation itself - an aspect of the brain's performance that is not performed by the computer.

This other aspect is where god hides?
 
... Data himself was constantly aware that he was in some way lacking.
Hmm, sounds like self-awareness to me.

To him the living crew members had some X factor which was entirely inaccessible to him.
Bear in mind that Data was a fictional plot device to explore the entertainment value of such philosophical musings. It is a useful model for the robotic AI concept, but it would be a mistake to take Data's existential feelings of inadequacy in the TV shows as anything more than plot devices themselves. The TV scripts were continually playing with AI paradoxes, e.g. if Data was worried about lacking some essential human feature, wasn't that insecurity itself evidence of the very human-like thought patterns he felt he lacked? And so on.

IOW it's an example that shouldn't be taken too far out of context - entertainment.
 
Hmm, sounds like self-awareness to me.


Bear in mind that Data was a fictional plot device to explore the entertainment value of such philosophical musings. It is a useful model for the robotic AI concept, but it would be a mistake to take Data's existential feelings of inadequacy in the TV shows as anything more than plot devices themselves. The TV scripts were continually playing with AI paradoxes, e.g. if Data was worried about lacking some essential human feature, wasn't that insecurity itself evidence of the very human-like thought patterns he felt he lacked? And so on.

IOW it's an example that shouldn't be taken too far out of context - entertainment.

Generally speaking, robots on TV and film are portrayed by human actors, and treated as if they were self-aware. Sometimes the writers try to have it both ways, with the droids on Star Wars being clearly self-aware, but at the same time entirely disposable.

I suspect that the universality of self-aware robots and androids in the arena of science fiction has itself influenced the discussion. Lovable Commander Data is played by lovable Brent Spiner, not by an actual robot.
 
Give me an example claim about subjective experience that this objective robot would disagree with.

I don't believe that something that didn't have subjective experience itself would be able to produce any kind of evaluation of it. I don't think that any set of rules that it could apply could determine what subjective experience is.

This is not an answer. The reason I'm asking these questions is that they lead to a particular and relevant line of thought; all you're doing is hand-waving. Something about your robot's tools has to be different, else you wouldn't call it an "objective robot". And if this objective robot does have the same tools we have, it would be contradictory to suppose that it would reject subjective experiences. After all, subjective experiences are not only tools we have for analyzing reality--they are the only tools.

That is why being an objective robot would be quite unlike being a human being. Human beings experience the universe. An objective robot would not. When we say that it comes to conclusions, we mean that a set of rules is applied and a conclusion reached.

The tools we use to make judgements are, in theory, capable of being applied automatically.

The questions I asked aren't meant to be general examples--they're specific and relevant questions. I have to ask them because, well, see the last paragraph.

Since you're not answering these questions, though, let me run this by you. Just object to anything you think is wrong:
  • The objective robot would claim that the penny behind the curtain is a real penny.
  • The objective robot would claim that the picture of a penny is not a real penny.
  • The objective robot would claim that the picture does not have a penny in it.
  • The objective robot would nevertheless recognize that the picture is a picture of a penny.
If this is acceptable to you, then we can derive something interesting about the objective robot. It can categorize particular images as something that is at least analogous to "looks like a penny". It can distinguish things that look like a penny and actually are pennies from things that look like a penny but are not. From this we can imagine that the robot can determine whether or not an image looks like a penny independently from its determination of whether or not an image actually is a penny.

Whether or not this objective robot has "subjective experiences" by your definition, whatever it is actually able to do by the above (if that's acceptable to you) sure sounds like the kinds of things we would describe as subjective experiences. So what exactly would it have good reason to doubt?

The robot would apply test A, which would say that it has something that looks like a penny, and test B, which would say that it has something that doesn't weigh the same as a penny. If this equates to subjective experience, then everything that interacts with something else has a subjective experience of it.

I don't find this a useful or distinguishing definition. I know what subjective experience is, because I have the subjective experience of looking at a penny, or holding it in my hand. A slot machine might be able to distinguish a penny very accurately - perhaps more accurately than I can - but I don't think that the slot machine will have an experience of the penny, in the same way that I do. If it does, then so does the table the penny lies on, or the ray of sunlight that shines off it.

The objective robot experiences the penny the same way that the slot machine does, or the table. It's a conceptual aid.
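The two-test robot described in this exchange can be sketched in code, purely as a conceptual aid. This is a hypothetical illustration, not anything either poster specified: the function names, the feature flags, and the weight tolerance are all invented for the sketch (the 2.5 g figure is the mass of a modern US cent).

```python
# A minimal sketch of the "objective robot": test A checks appearance
# ("looks like a penny"), test B adds a physical check that a mere
# picture of a penny cannot pass. All names here are hypothetical.

PENNY_MASS_G = 2.5  # mass of a modern US cent, in grams


def looks_like_penny(image_features):
    # Test A: appearance only - e.g. round, copper-coloured, Lincoln profile.
    return all(image_features.get(k) for k in ("round", "copper", "lincoln"))


def is_real_penny(image_features, mass_g, tolerance=0.1):
    # Test B: passes test A AND weighs roughly what a penny weighs.
    return looks_like_penny(image_features) and abs(mass_g - PENNY_MASS_G) <= tolerance


features = {"round": True, "copper": True, "lincoln": True}
print(looks_like_penny(features))      # a photo of a penny can pass test A
print(is_real_penny(features, 0.0))    # but a photo fails test B
print(is_real_penny(features, 2.5))    # a real penny passes both
```

Note that the "looks like" judgement is computed independently of the "is" judgement, which is exactly the separation the discussion turns on: the robot can represent appearance without that representation settling the question of what the thing actually is.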
 
If this equates to subjective experience, then everything that interacts with something else has a subjective experience of it.

... <snip> ...

If it does, then so does the table the penny lies on, or the ray of sunlight that shines off it.

... <snip> ...

I saw that coming.
 
When we struggle to define consciousness specifically enough to make a conscious machine, we forget, dammit, that it's not one thing. It's the mess evolution gave us, and the arbitrary variations of its design and nature are daunting.

I think you have it backwards -- it is the people struggling to make a conscious machine that are more acutely aware of the variation in the architecture of the brains of conscious entities.
 
How can a technology describe exactly how you think and feel when you can't do so yourself? The language of thoughts and feelings is inherently imprecise.

Who cares about language? What part of "the machine reads your mind and translates it into data one can read" don't you understand?
 
I don't find this a useful or distinguishing definition. I know what subjective experience is, because I have the subjective experience of looking at a penny, or holding it in my hand. A slot machine might be able to distinguish a penny very accurately - perhaps more accurately than I can - but I don't think that the slot machine will have an experience of the penny, in the same way that I do. If it does, then so does the table the penny lies on, or the ray of sunlight that shines off it.

Yes, that's the problem with relying on subjective experience to define consciousness. Either we slip all the way down to the bottom and wind up with some hippie-dippie "everything is conscious" outlook, or we fall back on fallacious arguments from incredulity, or we throw the whole thing out and rely on objective evidence alone.
 
Yes very interesting, we have a long voyage of discovery before us.

Thank you!

I was starting to feel hurt that no one was responding with "nonsense" or "you don't know what you're talking about." :rolleyes:
 
I don't believe that something that didn't have subjective experience itself would be able to produce any kind of evaluation of it.
"Subjective experiences are not real" is a kind of evaluation of it.
I don't think that any set of rules that it could apply could determine what subjective experience is.
But subjective experiences should have a mechanism, and that mechanism should have a causal relation to your describing them with the term "subjective experience". Our robot can use this as a basis for applying the term "subjective experience" to a meaning. At an absolute worst case, it would include too many things, and this ill-defined extra that you think is critical to subjective experience would remain unknown.

But there should also be a reason that the robot can figure out, in the causal connections--at least in theory--for why you say that there is this ill-defined extra. If that reason is, in fact, based on the actual ill-defined extra, then the robot is actually in a better situation of saying what subjective experience really is than you are, even with your "cheat" of actually experiencing it. And if it is "not", then there is no such ill-defined extra.

If you see a third possibility, then please, point it out.
The tools we use to make judgements are, in theory, capable of being applied automatically.
But the tools we use to make judgments are, in practice, applied automatically.
The robot would apply test A, which would say that it has something that looks like a penny, and test B, which would say that it had something that didn't weigh the same as a penny.
You're being a very poor devil's advocate. I think I can guess the reason you're going along these lines--it's because you want to remove the analogy of "looks like a penny", because you think I'm defining subjective experience. That is, in fact, not what I'm doing--I'm playing the devil's advocate, and granting that the robot has no subjective experience, but trying to show you that even in that scenario, the robot winds up having a perfectly good theory of what subjective experience is.

What you're trying to do, I'm guessing, is to make the robot's evaluation seem to me to be that much further removed from experiencing, by associating it with things like objective measures. However, in doing so, you are ironically weakening your argument. Here's why.

Suppose your robot does analyze pennies and photographs this way. Well, we do not. We come to the conclusion fairly quickly that the penny behind the curtain is a real penny, but the penny in the photograph is just a photograph of a real penny. Now let's say that the robot hears us make this "magical" claim that we can tell real pennies from photos of pennies without weighing them--just by looking at them. Let's further suppose that the robot doubts us, and performs a simple test of our alleged capabilities using something very similar to the MDC.

Now here's the problem. We pass the test.

Now what is the robot to do with our claims of "magical" capabilities of divining real pennies from photos of pennies? At this point it must conclude that there should be some underlying mechanism that we use that allows us to make this determination, even though it does not know exactly what this mechanism is. Remember the plotting machine making a star? Just like that.

And also similar to this machine, we make a claim that we use Subjective Experience technology <TM> to perform this analysis. Well, the robot doesn't know exactly what Subjective Experience technology is, so it goes about opening our heads and figuring out how we really do it.

You can guess the rest of the plot... I keep repeating this story.
If this equates to subjective experience, then everything that interacts with something else has a subjective experience of it.
I'll grant that it doesn't equate if you note that I never said that it equates. I still reach my conclusion that the robot doesn't arrive at a claim that subjective experience doesn't exist (with the caveat that it might, but depends on the robot and the line of inquiry; nevertheless, I still maintain you should check that robot's warranty).

You're missing the point of what I'm trying to do... when I say "at least analogous to" in the latter post, I'm not trying to claim that they "are" subjective experiences in themselves. My point is about how an entity that is minimally capable of doing so associates a word to a meaning--it has to map this word to some set of invariants that applies to the way that the word is used. The analogy here is so tight that there's no reason for the robot to doubt that subjective experiences actually exist--unless you can make some specific claim that differentiates what you surmise that the robot would get "subjective experience" confused with, and what subjective experiences actually are.

And that's what I was looking for when I asked you to make a claim about subjective experiences that you think the robot would disagree with. In the latest reply, you suggested that this was probably impossible. Well, the implication of its being impossible is that the robot would think it does exist, and that it's no different from the kinds of things it thinks it might be.

And furthermore, who is to say the robot's guess is wrong? Shouldn't the subjective experiences actually correlate to something real, that is part of the causal chain, assuming they do exist?
I don't find this a useful or distinguishing definition.
Doesn't matter. The whole point is, "subjective experiences are not real" is a type of claim. As I said before, you're severely underestimating what it takes to claim this--even for "objective robots".
The objective robot experiences the penny the same way that the slot machine does, or the table. It's a conceptual aid.
As are experiences.
 
Yes, that's the problem with relying on subjective experience to define consciousness. Either we slip all the way down to the bottom and wind up with some hippie-dippie "everything is conscious" outlook, or we fall back on fallacious arguments from incredulity, or we throw the whole thing out and rely on objective evidence alone.

The third being the least plausible of all. It's like saying that the programmes are real but we don't believe the television exists.
 
Who cares about language? What part of "the machine reads your mind and translates it into data one can read" don't you understand?

A camera can do that now. We can get an inexact impression of someone else's subjective experience right now. Doing it via the brain rather than via external behaviour is not different in principle. Doing it exactly is impossible because of inherent imprecision.
 
A camera can do that now. We can get an inexact impression of someone else's subjective experience right now. Doing it via the brain rather than via external behaviour is not different in principle. Doing it exactly is impossible because of inherent imprecision.

The god of the inherent imprecision.
 